I am trying to build a multi-file Python application with PyInstaller. For that I have compiled the code with Cython, and am using the generated .so files in place of the .py files.
Assuming the first file is main.py and the imported ones are file_a.py and file_b.py, I get file_a.so and file_b.so after Cython compilation.
When I put main.py, file_a.so and file_b.so in a folder and run it with "python main.py", it works.
But when I build it with PyInstaller and try to run the executable generated, it throws errors for imports done in file_a and file_b.
How can this be fixed? One solution is to import all standard modules in main.py and this works. But if I do not wish to change my code, what can be the solution?
So I got this to work for you.
Please have a look at Bundling Cython extensions with Pyinstaller
Quick Start:
git clone https://github.com/prologic/pyinstaller-cython-bundling.git
cd pyinstaller-cython-bundling
./dist/build.sh
This produces a static binary:
$ du -h dist/hello
4.2M dist/hello
$ ldd dist/hello
not a dynamic executable
And produces the output:
$ ./dist/hello
Hello World!
FooBar
Basically this came down to producing a simple setup.py that builds the extensions file_a.so and file_b.so, and then using PyInstaller to bundle the application and the extensions into a single executable.
Example setup.py:
from glob import glob
from setuptools import setup
from Cython.Build import cythonize
setup(
    name="test",
    scripts=glob("bin/*"),
    ext_modules=cythonize("lib/*.pyx")
)
Building the extensions:
$ python setup.py develop
Bundling the application:
$ pyinstaller -r file_a.so,dll,file_a.so -r file_b.so,dll,file_b.so -F ./bin/hello
Just in case someone's looking for a quick fix.
I ran into the same situation and found a quick/dirty way to do the job. The issue is that PyInstaller does not add to the .exe file the libraries that are needed to run your program.
All you need to do is import all the libraries (and the .so files) needed into your main.py file (the file which calls file_a.py and file_b.py). For example, assume that file_a.py uses opencv library (cv2) and file_b.py uses matplotlib library. Now in your main.py file you need to import cv2 and matplotlib as well. Basically, whatever you import in file_a.py and file_b.py, you have to import that in main.py as well. This tells pyinstaller that the program needed these libraries and it includes those libraries in the exe file.
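The reason this works: PyInstaller discovers dependencies by statically scanning the import statements in your .py source, so imports that only happen inside compiled .so files are invisible to it. The standard-library modulefinder does the same kind of static scan and can illustrate the mechanism (a sketch of the idea, not PyInstaller's actual analysis):

```python
# Static import analysis only sees imports written in readable .py source,
# which is why dependencies hidden inside compiled extensions go missing.
import os
import tempfile
from modulefinder import ModuleFinder

# A stand-in for main.py that imports two standard-library modules.
src = "import json\nimport csv\n"
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(src)
    path = f.name

finder = ModuleFinder()
finder.run_script(path)
os.unlink(path)

print("json" in finder.modules)  # → True: the scan found the import
```

PyInstaller also has a documented --hidden-import MODULENAME option for exactly this situation, which avoids touching main.py at all.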
Related
As the title suggests, I'm trying to make a python script accessible from the command line. I've found libraries like click and argv that make it easy to access arguments passed from the command line, but the user still has to run the script through Python.
Instead of
python /location/to/myscript.py
I want to be able to just do
myscript
from any directory
From what I understand, I can achieve this on my computer by editing my PATH variables. However, I would like to be able to simply do:
pip install myscript
and then access the script by typing myscript from anywhere. Is there some special code I would put in the setup.py?
Use console_scripts to hook a command to a specific Python function (rather than a whole executable). In your setup.py file:
from setuptools import setup
setup(
    ...
    entry_points={
        'console_scripts': ['mybinary=mymodule.command_line:cli'],
    },
    name='mymodule',
    ...
)
the command_line.py script would be:
import mymodule

def cli():
    print("Hello world!")
and the project directory would look like this:
myproject/
    mymodule/
        __init__.py
        command_line.py
        ...
    setup.py
Setuptools will generate a standalone "shim" script that imports your module and calls the registered function.
That shim lets you call mybinary directly and ensures it is invoked by the correct Python interpreter. Setuptools provides platform-specific shims (e.g., on Windows it generates a .exe).
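What the shim does can be sketched in a few lines. This is an illustration of the mechanism, not the exact script setuptools writes, and the os.path spec below is only a stand-in for the mymodule.command_line:cli entry point registered above:

```python
# Sketch of how a console_scripts shim resolves a "module:function" spec.
from importlib import import_module

def load_entry_point(spec):
    # A spec like "mymodule.command_line:cli" names a module and an attribute.
    module_name, attr = spec.split(":")
    return getattr(import_module(module_name), attr)

# Demonstrated with a standard-library spec; the real shim would load
# "mymodule.command_line:cli" and call sys.exit(cli()).
func = load_entry_point("os.path:basename")
print(func("/tmp/example.txt"))  # → example.txt
```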
See packaging documentation for more details.
You can do this with setuptools.
An example of a nice setup.py (say your package requires pandas and numpy):
import setuptools
setuptools.setup(
    name='myscript',
    version='1.0',
    scripts=['./scripts/myscript'],
    author='Me',
    description='This runs my script which is great.',
    packages=['lib.myscript'],
    install_requires=[
        'setuptools',
        'pandas >= 0.22.0',
        'numpy >= 1.16.0'
    ],
    python_requires='>=3.5'
)
Your directory should be setup as follows:
[dkennetz package]$ ls
lib scripts setup.py
inside lib would be:
[dkennetz package]$ ls lib
myscript
inside of myscript would be:
[dkennetz package]$ ls lib/myscript
__main__.py
__init__.py
helper_module1.py
helper_module2.py
__main__.py would be used to call your function and do whatever you want to do.
inside scripts would be:
[dkennetz package]$ ls scripts
myscript
and the contents of myscript would be:
#!/usr/bin/env bash
# Pass all arguments through to the package; show help when none are given.
if [[ $# -eq 0 ]]; then
    python3 -m myscript -h
else
    python3 -m myscript "$@"
fi
then to run you do: python setup.py install
which will install your program, along with all of the dependencies listed in install_requires=[] in your setup.py, and register myscript as a command-line tool:
[dkennetz ~]$ myscript
Assuming you are in the bash shell and Python 3 is installed, and you want to be able to do what you are requesting, you will need to append the path of the script file to the PATH variable in the .bash_profile file in your home directory. Also, your Python script file needs something similar to the following as its first line:
#!/usr/bin/env python3
Additionally, you can remove the extension (.py) from the script file, so that, as in my example above, the filename is script rather than script.py.
You will also need to make the file executable:
chmod 755 filename
If you want the script to be accessible system-wide, you will need to modify /etc/profile and add to the bottom of the file:
export PATH=$PATH:/path/to/script
Alternatively, if you move the python script file to /usr/local/bin, it may not be necessary to make any profile changes as this directory is often already in the PATH.
To see the value of PATH issue the following command at the shell
echo $PATH
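Putting the three pieces above together (the shebang, the dropped .py extension, and the chmod), with a hypothetical script called hello created in /tmp for illustration:

```shell
# Create an extensionless Python script, make it executable, and run it.
cat > /tmp/hello <<'EOF'
#!/usr/bin/env python3
print("Hello from the command line")
EOF
chmod 755 /tmp/hello
/tmp/hello    # prints: Hello from the command line
```

Once the directory containing the script is on your PATH, typing hello works from any directory.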
I know this question is older, and for a project using setuptools you should definitely use Tombart's answer.
That said, I have been using poetry, which uses a pyproject.toml file instead. Since that is likely what others searching here use, this is how you register a script in the toml file (at least with poetry):
[project.scripts]
myscript = "mymodule.command_line:cli"
Not sure if this works for flit or any other package manager, but it works for poetry.
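For context, a minimal pyproject.toml around that table might look like this. The names are placeholders, and the build-system section assumes a poetry-core version with PEP 621 metadata support; check the poetry docs for the exact requirements of your version:

```toml
# Hypothetical minimal pyproject.toml; [project.scripts] registers the command.
[project]
name = "mymodule"
version = "0.1.0"

[project.scripts]
myscript = "mymodule.command_line:cli"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```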
I have a file structure like this:
project_folder/
notebooks/
notebook01.ipynb
notebook02.ipynb
...
notebookXY.ipynb
module01.py
module02.py
module03.py
In the .ipynb files inside the notebooks/ folder I want to import classes and functions from module01.py, module02.py and module03.py.
I have found an answer in this question saying that it is possible using the following lines of code, run as the first cell in every notebook:
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)
Is there please a better way for this? What if I have A LOT of .ipynb files inside notebooks/ folder, do I have to paste those lines of code at the beginning of every single one? Is there a better, more minimalist or cleaner way?
Try adding the project_folder to your PYTHONPATH environment variable. This will allow you to tell python to search that directory for imports.
You would do this in your user profile settings, or in your startup script - not in python. It's something that has to be set before python ever gets run.
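A quick way to see the effect, using a throwaway directory (in practice you would put export PYTHONPATH="$PYTHONPATH:/path/to/project_folder" in your shell profile instead):

```shell
# Make a module visible to Python via PYTHONPATH without touching sys.path.
mkdir -p /tmp/demo_project
echo "x = 42" > /tmp/demo_project/module01.py
PYTHONPATH=/tmp/demo_project python3 -c "import module01; print(module01.x)"
# prints: 42
```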
Another solution is to move all your Python modules (.py files) into a folder and make them an installable package. If you pip install it into your current environment, you can then import the package into any notebook in that environment, regardless of folder structure.
So in your situation you could have:
project_folder/
    notebooks/
        notebook01.ipynb
        notebook02.ipynb
        ...
        notebookXY.ipynb
    my_package/
        __init__.py
        module01.py
        module02.py
        module03.py
    setup.py
__init__.py can just be an empty file, and tells Python "everything in this folder is part of a package"
For an explanation of what goes in setup.py see here.
A basic setup.py can be as simple as this:
import setuptools
setuptools.setup(
    name="my_package",
    version="0.0.1",
    description="A small example package",
    packages=setuptools.find_packages(),
    python_requires='>=3.7',
)
Install it:
cd project_folder
pip install [-e] .
Including the optional -e flag will install my_package in "editable" mode, meaning that instead of copying the files into your virtual environment, a symlink will be created to the files where they are.
Now in any notebook you can do:
import my_package
Or
from my_package.module01 import <some object>
I'm currently trying to bundle my Python (3.4.4) application with PyInstaller. I'm following the PyInstaller documentation about how to build a .spec file.
This is what my project file tree looks like:
project/
    __init__.py
    main.py
    ui.py
    Lib/
        __init__.py
        History.py
        Command.py
        Graphics.py
        Tools.py
According to the documentation, all I have to do is run the pyi-makespec main.py command, and the Lib package will be detected as long as it is imported from the main.py file.
Documentation:
Because your script includes the statement import helpmod, PyInstaller will create this folder arrangement in your bundled app
https://pythonhosted.org/PyInstaller/spec-files.html#using-data-files-from-a-module
This is how the beginning of my main.py looks:
# main.py
from Lib.Command import Command
from Lib.History import History
from Lib.Graphics import Graphics
import Lib.Tools as Tools
When I try to run the app afterwards, all I get is this error. I probably missed something; does someone have an idea of the problem? :D
error log : http://pastebin.com/3dygTqfn
EDIT: Just figured out that the problem comes from my Graphics.py library, which imports some bokeh tools I use to generate histograms. Still don't know why the bokeh import makes the whole thing crash; it works perfectly fine when I run it with a Python interpreter...
I have a script I'm trying to compile with PyInstaller (2.1) using Python 2.7
The script uses a custom package I've written called 'auto_common'
In the script I import it by using
sys.path.append(path_to_package)
The project folders look like this:
Automation/                Top level project
    Proj1/
        script1.py         This is the script I want to compile
        myspec.spec        Spec file for the script
    Packages/
        auto_common/
            __init__.py    Init module of the package (empty)
            ...            More modules here
In the PyInstaller log file I get the following warning:
W: no module named auto_common (top-level import by __main__)
How do I create a hook which will include the package (using sys.path.append for example)?
I tried adding the path of the package to 'pathex' in the spec file but it didn't work.
Using "-p" when compiling (or when building a spec file) will add additional paths to Python's module search path:
pyinstaller -p any_path/Automation/Packages script1.py
This mimics the behavior of sys.path.append().
Thanks to the guys at PyInstaller for the solution:
sys.path.append does not work when compiling with PyInstaller 2.1
I tried using py2app, but I can't figure out where to put the filename of the file I want to make the standalone for. What command do I need to run? I'm extremely confused...
In your setup.py, you want to do something like this:
from distutils.core import setup
import py2app

setup(
    name="App name",
    version="App version",
    options=opts,     # see the py2app docs; this could be a lot of things
    app=["main.py"],  # this is the standalone script
)
See the docs: you pass the name of your file to the py2applet script.
$ py2applet --make-setup MyApplication.py
Wrote setup.py
$ python setup.py py2app -A
And IMHO, PyInstaller is the best tool for building Python binaries.