I am trying to add versioning to my current Python project. I have installed Versioneer and set up the setup.cfg file to the best of my understanding of the documentation. When I run versioneer install I receive an error stating it cannot find the _version.py file. Below is a copy of the file and the error I receive.
[versioneer]
VCS = git
style = pep440
versionfile_source = hw-assesment-tool/_version.py
versionfile_build = hw-assesment-tool/_version.py
tag_prefix =
parentdir_prefix =
and this is the error I get:
FileNotFoundError: [Errno 2] No such file or directory: 'hw-assesment-tool/_version.py'
I had that problem with a similar setup.cfg file, but in a tiny project with no nested directories. I ended up needing to remove the directory in front of _version.py, because the versioneer.py script searches from the directory that contains setup.cfg, so it was looking in a directory that didn't exist.
I'm not sure how your file structure is set up but it might work if you try changing it to:
versionfile_source = _version.py
versionfile_build = _version.py
If all your source files are in the same directory as the setup.cfg, then that should fix it.
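If it helps, here is a quick sanity check (a rough sketch; the path is the versionfile_source value from the setup.cfg above). Versioneer resolves that path relative to the directory containing setup.cfg, so the directory part has to exist on disk first:
from pathlib import Path

# value of versionfile_source from setup.cfg, resolved relative to the
# directory that contains setup.cfg (run this from that directory)
versionfile = Path("hw-assesment-tool") / "_version.py"
print("configured path:", versionfile)
print("package directory exists:", versionfile.parent.is_dir())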
For one Python project, I want to ship a data file with the package.
Following various (and partially contradictory) advice on the mess that is Python package data, I ended up trying different things and got it to work locally on my machine with the following setup.
My setup.cfg contains, among other things that shouldn't matter here,
[options]
include_package_data = True
and no package_data or other data related keys. My MANIFEST.in states
recursive-include lexedata clics3-network.gml.zip
My setup.py is pretty bare, essentially
from setuptools import setup
readline = "readline"
setup(extras_require={"formatguesser": [readline]})
To load the file, I use
pkg_resources.resource_stream("lexedata", "data/clics3-network.gml.zip")
I test this using tox, configured with
[tox]
isolated_build = True
envlist = general
[testenv]
passenv = CI
deps =
codecov
pytest
pytest-cov
commands =
pytest --doctest-modules --cov=lexedata {envsitepackagesdir}/lexedata
pytest --cov=lexedata --cov-append test/
codecov
On my local machine, when I run pip install ., the data file lexedata/data/clics3-network.gml.zip is properly deposited inside the site-packages/lexedata/data directory of the corresponding virtual environment, and tox packages it inside .tox/dist/lexedata-1.0.0b3.tar.gz as well as in its venv site-packages directory .tox/general/lib/python3.8/site-packages/lexedata/data/.
However, continuous integration using GitHub Actions fails on all Python 3 versions I'm testing with
UNEXPECTED EXCEPTION: FileNotFoundError(2, 'No such file or directory')
FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/work/lexedata/lexedata/.tox/general/lib/python3.10/site-packages/lexedata/data/clics3-network.gml.zip'
at the equivalent of that same tox venv path.
What could be going wrong here?
You almost did it right; try slightly updating your MANIFEST.in to any of the following examples:
include src/lexedata/data/*.zip
recursive-include src/* *.zip
recursive-include **/data clics3-network.gml.zip
As you can find in the docs, the include command defines files as paths relative to the root of the project (that's why the first example starts from the src folder).
recursive-include expects its first argument to be a dir-pattern (glob-style), so it is better to include asterisks.
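To double-check that the data file actually made it into an installed copy of the package, a minimal sketch (assuming the package name lexedata and the resource path data/clics3-network.gml.zip from the question):
import pkg_resources

# opens the resource from wherever the installed package lives (also works
# for zipped installs); fails loudly if the file was not packaged
with pkg_resources.resource_stream("lexedata", "data/clics3-network.gml.zip") as f:
    print(len(f.read()), "bytes")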
I have a Python file named file_processor.py. I would like to create an egg file out of this Python file to use it in other projects. My setup.py file looks as follows:
from setuptools import setup, find_packages
setup(
name = "file_processor",
version = "0.5",
packages = find_packages()
)
And I run this script with the following command:
python setup.py bdist_egg
This command generates 3 folders, namely: build, dist, and file_processor.egg-info. My .egg file is located in the dist folder. However, if I change its extension from .egg to .zip to see the contents, I find only one folder, EGG-INFO, and not the actual Python file. And so, if I try to add that .egg file to my project path and import the file_processor module, Python throws an error that no module named file_processor was found. What am I doing wrong here? Note: I got the information for generating egg files from this link
Wheel files are generally preferred over eggs these days.
Regardless, I would guess that you don't have file_processor.py in a separate directory; you have it in the same directory as setup.py, and it needs to be in its own directory.
You should also include an __init__.py in that directory; inside the file you can put
from .file_processor import *
This will import all the functions from your file into the package so you can use them.
This tutorial is quite good if you're looking for more information https://python-packaging.readthedocs.io/en/latest/minimal.html
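To make the suggested layout concrete, here is a sketch of what the answer describes (the function name below is hypothetical):
# file_processor/            <- new package directory next to setup.py
#     __init__.py            <- contains: from .file_processor import *
#     file_processor.py      <- the original module
#
# After rebuilding with python setup.py bdist_egg, find_packages() picks up
# the file_processor package, the code lands inside the .egg, and this works:
import file_processor

file_processor.process("example.txt")  # hypothetical function defined in file_processor.py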
After reading the Python documentation, I still have trouble figuring it out.
setup.py and adding file to /bin/
And I also read that post, which did not help clarify my situation.
So first, here's my file structure just to give you a clear notion of what I'm trying to do:
setup.py
setup.cfg
MANIFEST.in
README.md
requirements.txt
graphite/
- generate_report.py
- capture.js
- templates/
- - pretty much a bunch of JS/HTML/CSS files that get used by generate_report.py
So as you can see, generate_report.py is my only Python file, and I'm trying to have it linked into a /bin folder when the user runs a "pip install" of my package. generate_report.py, however, requires capture.js (just a PhantomJS file that I run with a Python subprocess) and the templates folder to be in the same directory for it to run (I can change this in the code). I'm assuming it's best to link the file to a /bin folder, which I'm also assuming setup.py does for you.
Here's my setup.py code
from distutils.core import setup
setup(
name = 'graphite-analytics',
packages = ['graphite'],
version = '0.1.1',
description = 'Create a print-out template for your google analytics data',
author = 'NAME REDACTED',
author_email = 'EMAIL-REDACTED',
url = 'https://github.com/ARM-open/Graphite',
download_url = 'https://github.com/ARM-open/Graphite/archive/0.1.1.tar.gz',
keywords = ['Google analytics', 'analytics', 'templates'],
classifiers = [],
install_requires=['Click', 'google-api-python-client', 'jinja2'],
entry_points={'console_scripts': [
'graphite-analytics=graphite.generate_report:main'
]}
)
and also here's my MANIFEST.in file just in case it's needed
recursive-include graphite/templates *
include graphite/capture.js
I understand I have console_scripts running in my setup.py, but I'm very confused about how to use it (I'm not sure I'm even using it correctly). I've read the Python documentation, and my main function pretty much handles most of it.
So I renamed generate_report to graphite, and now it works! (Check my setup.py; you'll see it in entry_points.)
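For reference, one common pattern (not necessarily what was done here) is to have generate_report.py resolve capture.js and templates/ relative to the module rather than the working directory; a rough sketch, assuming both ship inside the graphite package via the MANIFEST.in above together with include_package_data or package_data:
import os
import subprocess

PACKAGE_DIR = os.path.dirname(os.path.abspath(__file__))
CAPTURE_JS = os.path.join(PACKAGE_DIR, "capture.js")
TEMPLATE_DIR = os.path.join(PACKAGE_DIR, "templates")

def main():
    # hypothetical PhantomJS invocation through a subprocess, as described above
    subprocess.call(["phantomjs", CAPTURE_JS])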
I have the following directory structure:
data/
 |__ __init__.py
 |__ file1
 |__ file2
script
README
MANIFEST.in
__init__.py
setup.py
The Python script script uses the data files in data. I am trying to make a source tarball for this script so that it can be used systemwide.
The __init__.py file is empty. The file 'script' opens the data files through the paths 'data/file1' and 'data/file2'. The contents of MANIFEST.in are:
include README script
recursive-include data *
In setup.py, amongst other things, I have :
packages = ["data"],
package_data = {"data": ["*"]},
scripts = ["script"]
After setting up the distribution (using sdist), I tried installing it on my system. When I try using script, it says :
Traceback (most recent call last):
File "/home/nsoum/anaconda/bin/script", line 54, in <module>
with open('data/file1', 'r') as do:
IOError: [Errno 2] No such file or directory: 'data/file1'
I guess this means that the relative paths of the data files are not preserved. How do I work around that and make sure that my script has access to the data files?
Thank you.
What happens is that open('data/file1', 'r') operates relative to the current working directory of the process while the path is really relative to the code file. The best way to get at data files relative to the code distribution is to use setuptools rather than bare distutils for building and distributing your package. setuptools has API for getting at a package's data resource files.
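For example, a minimal sketch of reading file1 through pkg_resources instead of a path relative to the working directory (assuming the data package from the question):
import pkg_resources

# looks the file up relative to the installed "data" package, not the cwd
with pkg_resources.resource_stream("data", "file1") as do:
    contents = do.read()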
I created a private Python package that requires an XML file. When I run the package locally and on CircleCI, everything works great. Now, when I run code that installs the package as a dependency, I keep getting an error:
<urlopen error [Errno 2] No such file or directory: '/home/ubuntu/virtualenvs/venv-system/local/lib/python2.7/site-packages/...../metadata_wsdl.xml'>
Does anyone know what could be wrong? I have not been able to figure this one out.
You need to explicitly include any resources that aren't Python source code (*.py) in your setuptools distribution.
There are several ways to do this. The one I'd recommend is to use a combination of include_package_data = True in your setup() function and a MANIFEST.in file.
So assuming your distribution is laid out as my.package/my/package (i.e., with no intermediate src or lib directory), you could use something along these lines:
setup.py
from setuptools import setup, find_packages
setup(
...
packages = find_packages('my'), # include all packages under my/
include_package_data = True, # include everything in source control
# or included in MANIFEST.in
)
MANIFEST.in
recursive-include my *
recursive-include docs *
global-exclude *.pyc
global-exclude ._*
global-exclude *.mo
This would recursively include any type of file below my.package/my/ as well as my.package/docs/, and globally exclude some other types of files unwanted in a released distribution.
Please refer to Building and Distributing Packages with Setuptools » Including Data Files for more details on the available methods to include data files, and The MANIFEST.in template for more information about how to define your MANIFEST.
Once you've successfully included your data files in your distribution, you should make sure to use the ResourceManager API to access them from your code (as opposed to __file__ trickery or other path hacks, which won't work for certain platforms or zipped eggs).
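As a rough sketch (the package and resource names below are assumptions based on the question's error message, not the actual layout):
import pkg_resources

# resolves the resource relative to the installed package, even inside a zipped egg
with pkg_resources.resource_stream("my.package", "metadata_wsdl.xml") as f:
    wsdl = f.read()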