I created a Python package with the layout below. The package is primarily used to run stages of a Jenkins pipeline inside a Docker container. I created a repository on GitHub and wrote a Dockerfile with a step that clones the repository and runs pip install on the package. Then I built the Docker image.
jenkins_pipeline_pkg/
|-- jenkins_pipeline_pkg/
|   |-- __init__.py
|   |-- config/
|   |   |-- config.yaml
|   |-- scripts/
|       |-- pre_build.py
|       |-- build.py
|-- setup.py
I ran pip install on the package inside the Docker container built from that Dockerfile. The setup.py looks like the following.
#!/usr/bin/env python
from setuptools import setup

setup(name='jenkins_pipeline_pkg',
      version='0.1',
      description='Scripts for jenkins pipeline',
      url='<private repo url>',
      author='<name>',
      author_email='<email>',
      packages=['jenkins_pipeline_pkg'],
      zip_safe=False,
      entry_points={
          'console_scripts': [
              'pre-build = jenkins_pipeline_pkg.pre_build:main',
              'build = jenkins_pipeline_pkg.build:main',
          ],
      })
I ran pip install on the package. It installed the executables listed in entry_points into ~/.local/bin. Then I tried to run the executable from somewhere other than ~/.local/bin (say, from /home/user), and it wasn't found. Bash auto-completion doesn't show the pre-build command either. I don't know what I'm missing here.
Try either creating a symlink to the executable in /usr/bin, or adding ~/.local/bin to $PATH.
Edit:
export PATH=~/.local/bin:$PATH
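Incidentally, in the tree shown in the question, pre_build.py and build.py live under jenkins_pipeline_pkg/scripts/, while the entry points reference jenkins_pipeline_pkg.pre_build. If that tree is accurate, the specs would need the sub-package in the dotted path (and scripts/ would need its own __init__.py and to be listed in packages, or discovered with find_packages()). A hedged sketch of what the corrected specs might look like, with the name = module:function split they imply:

```python
# Hypothetical corrected console_scripts specs for the layout shown above,
# assuming jenkins_pipeline_pkg/scripts/ is an importable sub-package.
ENTRY_POINTS = {
    "console_scripts": [
        "pre-build = jenkins_pipeline_pkg.scripts.pre_build:main",
        "build = jenkins_pipeline_pkg.scripts.build:main",
    ],
}

# Each spec has the shape "<script name> = <dotted.module>:<function>":
parsed = {}
for spec in ENTRY_POINTS["console_scripts"]:
    name, target = (part.strip() for part in spec.split("=", 1))
    module, func = target.split(":")
    parsed[name] = (module, func)
```

This dict would be passed as entry_points=ENTRY_POINTS to setup().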
I wrote a command-line app in Python.
The problem is that I want users to be able to use the command globally after installing it.
I published the package, but I don't know how to make it globally available to users as a system command.
Example :
pip install forosi
and after that the user can run this command globally, from anywhere:
forosi help
I'm going to assume the main file you are supposed to run is src/forosi.py in your package directory, but you should be able to adapt this if it's different.
First, you want to rename the script to forosi, without the .py extension.
Second, at the top of the file (now called forosi) add the following:
#!/usr/bin/env python3
... rest of file...
In your setup.py for the package, you need to use the scripts option.
setuptools.setup(
    ...
    scripts=['src/forosi'],
    ...
)
This is the method that requires minimal refactoring of your code. If you happen to have a main() function in one of your Python files which is the entry point of the script, you can instead add the following into your setup.py (note that each console_scripts entry needs a script name on the left of the equals sign):
setup(
    ...
    entry_points={
        'console_scripts': ['forosi = src.forosi:main'],
    }
    ...
)
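For the entry-point variant to work, src/forosi.py needs a main() function. A minimal hypothetical sketch (the "help" command here is just an example from the question):

```python
# hypothetical src/forosi.py: the main() that a
# 'forosi = src.forosi:main' entry point would call
import sys

def main(argv=None):
    # when installed as a console script, arguments come from sys.argv
    args = sys.argv[1:] if argv is None else argv
    if not args or args[0] == "help":
        print("usage: forosi <command>")
        return 0
    print("unknown command: " + args[0])
    return 1
```

The return value becomes the process exit code of the installed `forosi` command.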
In either case, to build the package locally, run
python3 setup.py bdist_wheel
This will create a wheel file in the dist/ directory called package_name-version-<info>.whl. This is the standard distribution format for PyPI packages.
To install this package, run:
pip3 install dist/package_name-version-<info>-.whl
or if you only have one version in the dist folder, just
pip3 install dist/*
I am trying to set up a Repl.it for my Python project, and when I run it, it fails because it cannot find the [tool.poetry] section. And yes, my project has a pyproject.toml file.
Repl.it: Updating package configuration
--> /usr/local/bin/python3 -m poetry add halo vistir distlib click packaging tomlkit pip-shims pythonfinder python-cfonts appdirs
[RuntimeError]
[tool.poetry] section not found in pyproject.toml
add [-D|--dev] [--git GIT] [--path PATH] [-E|--extras EXTRAS] [--optional] [--python PYTHON] [--platform PLATFORM] [--allow-prereleases] [--dry-run] [--] <name> (<name>)...
exit status 1
Repl.it: Package operation failed.
The question is: how can I know what is happening in the initialization stage, how does it know which dependencies to install, and how can I change this behavior? You can try this repo for reproduction: github/frostming/pdm.
After importing the project you can specify the run button's behaviour with a bash command.
This is saved to .replit. You can write things like pip3 install -r requirements.txt && python3 main.py. Read more about the available settings in the .replit docs.
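For example, a hypothetical .replit (the run command is whatever you want the run button to execute):

```toml
# .replit (sketch; the command string is just an example)
run = "pip3 install -r requirements.txt && python3 main.py"
```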
There is also another doc about dependencies, with the following quote:
In a pyproject.toml file, you list your packages along with other
details about your project. For example, consider the following
snippet from pyproject.toml:
...
[tool.poetry.dependencies]
python = "^3.8"
flask = "^1.1"
...
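So one workaround is to add a minimal [tool.poetry] section to pyproject.toml so that Repl.it's poetry invocation finds what it expects. A sketch, with all values below being placeholders:

```toml
# hypothetical minimal [tool.poetry] section
[tool.poetry]
name = "myproject"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = "^3.8"
```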
I have a Python project where I am using the maskrcnn_benchmark project from Facebook Research.
In my continuous integration script, I create a virtual environment where I install this project with the following steps:
- git clone https://github.com/facebookresearch/maskrcnn-benchmark.git
- cd maskrcnn-benchmark
- git reset --hard 5ec0b91cc85163ac3b58265b3f9b39bb327d0ba6
- python setup.py build develop
This works fine and installs everything in the virtual environment as it needs to be.
Now I have a setup.py for my project for packaging and deploying my app. How can I do the same in this setup.py file i.e. pull and build this repository from the particular commit hash?
Thanks to the answer below and the comments, I have the setup.py as follows now:
install_requires=[
'5ec0b91cc85163ac3b58265b3f9b39bb327d0ba6-0.1',
'ninja',
'yacs',
'matplotlib',
'cython==0.28.5',
'pymongo==3.7.1',
'scipy==1.1.0',
'torch==1.0.0',
'torchvision==0.2.1',
'opencv_python==3.4.2.17',
'numpy==1.15.1',
'gputil==1.3.0',
'scikit_learn==0.19.2',
'scikit_image==0.14.0',
'sk_video==1.1.10'
],
dependency_links=[
'http://github.com/facebookresearch/maskrcnn-benchmark/tarball/master#egg=5ec0b91cc85163ac3b58265b3f9b39bb327d0ba6-0.1'
],
No matter where I put '5ec0b91cc85163ac3b58265b3f9b39bb327d0ba6-0.1', the maskrcnn-benchmark project gets compiled first. How can I arrange for this dependency to be installed last?
You can use dependency_links in setup.py, e.g.:
dependency_links=[
    'https://github.com/GovindParashar136/spring-boot-web-jsp/tarball/master#egg=8138cc3fd4e11bde31e9343c16c60ea539f687d9'
]
In your case the URL would be:
https://github.com/facebookresearch/maskrcnn-benchmark/tarball/master#egg=5ec0b91cc85163ac3b58265b3f9b39bb327d0ba6
(Note that recent versions of pip no longer process dependency_links, so this approach only works with older tooling.)
This answer suggests using a PEP 508 direct reference of the form "packagename @ git+<url>@<commit>" in install_requires to install the specified git commit:
# in setup.py
setup(
    # other fields
    install_requires=[
        "packagename @ git+https://github.com/<user>/<repo>@<commit hash>",
    ],
)
so in your case:
# in setup.py
setup(
    # other fields
    install_requires=[
        "maskrcnn_benchmark @ git+https://github.com/facebookresearch/maskrcnn-benchmark.git@5ec0b91cc85163ac3b58265b3f9b39bb327d0ba6",
    ],
)
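As a sanity check on the syntax: a PEP 508 direct reference has the shape "<name> @ <url>", and for git URLs the pinned commit follows the final "@". A small hypothetical helper that pulls those pieces apart:

```python
# Split a direct reference "name @ url" and, for git URLs, the commit
# pinned after the final "@" (hypothetical helper, not part of pip).
def split_direct_reference(req):
    name, _, url = req.partition("@")     # first "@" separates name from url
    name, url = name.strip(), url.strip()
    commit = None
    if url.startswith("git+") and "@" in url:
        url, commit = url.rsplit("@", 1)  # last "@" pins the revision
    return name, url, commit

name, url, commit = split_direct_reference(
    "maskrcnn_benchmark @ git+https://github.com/facebookresearch/"
    "maskrcnn-benchmark.git@5ec0b91cc85163ac3b58265b3f9b39bb327d0ba6"
)
```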
Folks,
After building and deploying a package called myShtuff to a local pypicloud server, I am able to install it into a separate virtual env.
Everything seems to work, except for the path of the executable...
(venv)[ec2-user@ip-10-0-1-118 ~]$ pip freeze
Fabric==1.10.1
boto==2.38.0
myShtuff==0.1
ecdsa==0.13
paramiko==1.15.2
pycrypto==2.6.1
wsgiref==0.1.2
If I try running the script directly, I get:
(venv)[ec2-user@ip-10-0-1-118 ~]$ myShtuff
-bash: myShtuff: command not found
However, I can run it via:
(venv)[ec2-user@ip-10-0-1-118 ~]$ python /home/ec2-user/venv/lib/python2.7/site-packages/myShtuff/myShtuff.py
..works
Am I making a mistake when building the package? Somewhere in setup.cfg or setup.py?
Thanks!!!
You need a __main__.py in your package, and an entry point defined in setup.py.
See here and here, but in short: your __main__.py runs your main functionality when the module is run with python -m, and setuptools can expose whatever arbitrary functions you want as scripts. You can do either or both. Your __main__.py looks like:
from .stuff import my_main_func

if __name__ == "__main__":
    my_main_func()
and in setup.py:
entry_points={
    'console_scripts': [
        'myShtuffscript = myShtuff.stuff:my_main_func'
    ]
}
Here, myShtuffscript is whatever you want the executable to be called, myShtuff is the name of your package, stuff is the name of a file in the package (myShtuff/stuff.py), and my_main_func is the name of a function in that file.
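For intuition, the wrapper that setuptools generates for myShtuffscript behaves roughly like this simplified sketch (the real script is generated at install time and imports the function from myShtuff.stuff instead of defining it inline):

```python
# Roughly what the generated 'myShtuffscript' wrapper does (sketch only).
def my_main_func():
    # stand-in for the real myShtuff.stuff:my_main_func
    print("myShtuff main ran")
    return 0

# The wrapper imports the target function, calls it, and passes the
# return value to sys.exit() as the process exit code:
exit_code = my_main_func()
```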
You need to define entry_points in your setup.py in order to execute something directly from the command line:
entry_points={
    'console_scripts': [
        'cursive = cursive.tools.cmd:cursive_command',
    ],
},
More details can be found here.
I have a medium-sized Python command line program that runs well from my source code, and I've created a source distribution file and installed it into a virtual environment using "python setup.py install".
Since this is a pure Python program, and provided that the end users have installed Python and the required packages, my idea is that I can distribute it through PyPI for all available platforms as a source distribution.
Upon install, I get an 'appname' directory within the virtualenv's site-packages directory, and it also runs correctly when I type "python 'pathtovirtualenv'/Lib/site-packages/'myappname'".
But is this the way the end user is supposed to run distutils-distributed programs from the command line?
I find a lot of information on how to distribute a program using distutils, but not on how the end user is supposed to launch it after installing it.
Since you already created a setup.py, I would recommend looking at the entry_points:
entry_points={
    'console_scripts': [
        'scriptname=yourpackage.module:function',
    ],
},
Here, you have a package named yourpackage and a module named module in it, and you refer to the function function. This function will be wrapped by a script called scriptname, which is installed in the user's bin folder, which is normally on $PATH, so the user can simply type scriptname after installing your package via pip install.
To sum up: a user installs the package via pip install yourpackage and can then call the function in module via scriptname.
Here are some docs on this topic:
https://pythonhosted.org/setuptools/setuptools.html#automatic-script-creation
http://www.scotttorborg.com/python-packaging/command-line-scripts.html
Well, I eventually figured it out.
Initially I wanted to just use distutils; I like it when the end user can install with a minimum of extra dependencies. But I have now discovered that setuptools is the better option in my case.
My directory structure looks like this (Subversion):
trunk
|-- appname
| |-- __init__.py # an empty file
| |-- __main__.py # calls appname.main()
| |-- appname.py # contains a main() and imports moduleN
| |-- module1.py
| |-- module2.py
| |-- ...
|-- docs
| |-- README
| |-- LICENSE
| |-- ...
|-- setup.py
And my setup.py basically looks like this:
# This setup file is to be used with a setuptools source distribution.
# Run "python setup.py sdist" to deploy.
from setuptools import setup, find_packages

setup( name = "appname",
       ...
       include_package_data = True,
       packages = find_packages(),
       zip_safe = True,
       entry_points = {
           'console_scripts' : 'appname=appname.appname:main'
       }
)
The next step now is to figure out how to install the contents of the docs directory on the user's computer.
But right now, I'm thinking about adding --readme, --license, --changes, --sample (and so forth) options to the main script, to display them at run time.
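A sketch of that idea (hypothetical; in the real package the texts would come from the shipped docs files, e.g. via package_data, rather than inline strings):

```python
# Expose bundled docs as command line flags such as --readme and --license.
import argparse

# Placeholder texts; the real app would load README, LICENSE, etc. from
# files installed alongside the package.
DOCS = {
    "readme": "appname: a command line tool ...",
    "license": "See LICENSE for terms ...",
}

def build_parser():
    parser = argparse.ArgumentParser(prog="appname")
    for name in DOCS:
        parser.add_argument("--" + name, action="store_true",
                            help="print the " + name.upper() + " and exit")
    return parser

def run(argv):
    args = build_parser().parse_args(argv)
    for name, text in DOCS.items():
        if getattr(args, name):
            print(text)
            return 0
    return 1  # no doc flag given; the real app would run normally here
```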