So I created a setup.py script for my Python program with distutils, and I think it behaves a bit strangely. First off, it installs all data_files into /usr/local/my_directory by default, which is a bit weird since this isn't really a common place to store data, is it?
I changed the path to /usr/share/my_directory/. But now I'm not able to write to the database inside that directory, and I can't set the required permissions from within setup.py either, since the actual database file has not been created when I run it.
Is my approach wrong? Should I use another tool for distribution?
Because at least for Linux, writing a simple setup sh script seems easier to me at the moment.
The immediate solution is to invoke setup.py with --prefix=/the/path/you/want.
A better approach would be to include the data as package_data. This way the files will be installed alongside your Python package, and you'll find it much easier to manage them (find paths, etc.).
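For instance, a minimal sketch using setuptools (the package and file names are illustrative) might look like:

from setuptools import setup

setup(
    name="my_program",
    version="0.1",
    packages=["my_package"],
    # Ship data files that live inside the package directory itself.
    package_data={"my_package": ["data/*.db"]},
)

At runtime the code can then locate its data relative to the module, e.g. os.path.join(os.path.dirname(__file__), "data", "app.db"), instead of hard-coding an install prefix.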
I asked this question because I had to go through the whole process of creating my own application using Apple's somewhat lacking documentation, and without the use of py2app. I wanted to create the whole application structure so I knew exactly what was inside, as well as create an installer for it. The latter is still a mystery to me, so any additional answers with information on making a custom installer would be appreciated. As far as the actual "bundle" structure goes, however, I think I've managed to get the basics down. See the answer below.
Edit: a tutorial on using PyInstaller is linked at the end of this answer. I don't know how much it helps, as I haven't used it yet, but I have yet to figure out how to make a standalone Python application without a tool like this, and it may be just what you're looking for if you wish to distribute your application without relying on users knowing how to navigate their Python installations.
A generic application is really just a directory with a .app extension. So, in order to build your application, first make the folder without the extension; you can rename it later when you're finished putting it all together. Inside this main folder will be a Contents folder, which will hold everything your application needs. Finally, inside Contents, you will place a few things (a quick way to create this skeleton is sketched after the list):
Info.plist
MacOS
Resources
Frameworks
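As a quick sketch, the skeleton can be created like this (the app name is illustrative; rename the folder to MyApp.app once everything is in place):

import os

# Create the bare bundle layout: MyApp/Contents/{MacOS,Resources,Frameworks}
for sub in ("MacOS", "Resources", "Frameworks"):
    os.makedirs(os.path.join("MyApp", "Contents", sub), exist_ok=True)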
Here you can find some information on how to write your Info.plist file. Basically, this is where you detail information about your application.
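If you'd rather generate the file than write the XML by hand, Python's plistlib can do it; this is a minimal, illustrative set of keys, not an exhaustive one:

import plistlib

# A few common Info.plist keys; see Apple's documentation for the full list.
info = {
    "CFBundleName": "MyApp",
    "CFBundleExecutable": "MyApp",  # must match the file in Contents/MacOS
    "CFBundleIdentifier": "com.example.myapp",  # hypothetical identifier
    "CFBundleVersion": "1.0",
    "CFBundleIconFile": "MyApp.icns",
    "CFBundlePackageType": "APPL",
}
with open("MyApp/Contents/Info.plist", "wb") as f:
    plistlib.dump(info, f)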
Inside the MacOS folder you want to place your main executable. I'm not sure that it matters how you write it; at first, I just had a shell script that called python3 ./../Resources/MyApp.py. I didn't think this was very neat though, so eventually I called the GUI from a Python script, which became my executable (I used Tkinter to build my application's GUI, and I wrote several modules which I will get to later). So now my executable was a Python script with a shebang pointing to the Python framework in my application's Frameworks folder, and this script just created an instance of my custom Tk() subclass and ran the mainloop. Both methods worked, though, so unless someone points out a reason to choose one over the other, feel free to pick either.

The one thing that I believe is necessary is that you name your executable the SAME as your application (before adding the .app). That, I believe, is the only way that macOS knows to use that file as your application's executable. Here is a source that describes the bundle structure in more detail; it's not a necessary read unless you really want to get into it.
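For what it's worth, the Python-script flavour of the executable can be as small as this sketch (the class and app names are illustrative, and the shebang must be an absolute path to the bundled interpreter once the app is in its final location):

#!/Applications/MyApp.app/Contents/Frameworks/Python3.framework/Versions/3.8/bin/python3
# Illustrative executable, saved as Contents/MacOS/MyApp (same name as the app).
import tkinter as tk

class MyApp(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title("MyApp")

if __name__ == "__main__":
    MyApp().mainloop()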
In order to make your executable run smoothly, you want to make sure you know where your Python installation is. If you're like me, the first thing you tried doing on your new Mac was to open Terminal and type python3. If so, this prompted you to install the Xcode Command Line Tools, which include an installation of Python 3.8.2 (the most recent on Xcode 12). This Python installation is invoked at /usr/bin/python3, although it's actually using the Python framework located at
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/bin/python3
I believe, but am NOT CERTAIN, that you can simply make a copy of this framework in order to make the app portable: copy the Python3.framework folder and add it to your app's Frameworks folder. A quick side note to be wary of: Xcode comes packaged with a lot of useful tools. In my current progress, the tool I am most hurting for is the Fortran compiler (which I believe comes as part of GCC) that ships with Xcode; I need it to build SciPy with pip install scipy. I'm sure this is not the only package that requires tools Xcode provides, but SciPy is a pretty popular one, and I am currently facing this limitation. I think that by copying the Python framework you still lose some of the symlinks that point to Xcode tools, so any additional input on this would be great.
In any case, locate the Python framework that you use to develop your programs, and copy it into the Frameworks folder.
Finally, the Resources folder. Here, place any modules that you wrote for your Python app. You also want to put your application's icon file here; just make sure you indicate the name of the icon file, with its extension, in the Info.plist file. Also, make sure that your executable knows how to access any modules you place in here. You can achieve this with something like:
import os
import sys
# Make the bundle's Resources folder importable, relative to this executable.
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', 'Resources'))
import MyModules
Finally, make sure that any dependencies your application requires are located in the Python framework site-packages. These will be located in Frameworks/Python3.framework/Versions/3.X.Y/lib/python3.x.y/site-packages/. If you call this specific installation of Python from the command line, you can use path/to/application/python3 -m pip install package and it should place the packages in the correct folder.
P.S. As far as building the installer for this application goes, there are a few more steps needed before your application is ready for download. For instance, I believe you need to use the codesign tool in order to get your application past macOS Gatekeeper. This requires having a developer license and manipulating certificates, which I'm not familiar with. You can still distribute the app, but anyone who downloads it will have to bypass the security features manually, and it will seem a bit sketchy. If you're ready to build the installer (.pkg) file, take a look at the docs for productbuild; I used it and it works, but I don't yet know how to create custom steps and descriptions in the installer.
Additional resources:
A somewhat more detailed guide to the anatomy of a macOS app
A guide I found, but didn't use, on using codesign to get your app past Gatekeeper
A RealPython tutorial I found on using PyInstaller to build Python-based applications for all platforms
I have a question about adding the project path in Python, to make importing easier.
Situation
When I write code in Python, I usually add the necessary paths to sys.path with:
import sys
sys.path.append("/path/to/dir/") # almost every `.py` needs this
Sometimes, when my project gets bigger, with many levels of directories, this approach seems bulky and error-prone (especially when I reorganize my files).
Recently, I started using a bash script (located at the project root) that adds the sys.path.append call, with the project root as its argument, to every .py file in the project. With this approach, I hardly have to care about imports manually.
Question
My question is: is that a good practice? I find it convenient compared to my old method, but since the bash script is a separate file, I need two commands to run any script in my project (one for the bash script and one for the .py). I could include the command that calls the .py in the bash script, but that is far less flexible than calling it directly from the terminal.
I'd really like to hear some advice! Thanks in advance; any suggestion will be gratefully appreciated!
It is generally not good practice to manipulate sys.path within a Python library or program. Instead, add the relevant paths to PYTHONPATH in the calling environment for your Python program:
PYTHONPATH="/path/to/other/projects/directory:$PYTHONPATH" python ...
or
export PYTHONPATH="/path/to/other/projects/directory:$PYTHONPATH"
python ...
This lets you easily manipulate the paths that your program or library will search for dependencies, without modifying your code.
It is also very easy to manage this in your personal development environment by modifying your bashrc, or in your production environments via your init script (or other wrapper script), and it gives you one place to update each time you add or modify your project paths.
Given that you mention that you have almost one directory per .py file, you should also consider how your code might be organized into packages to further simplify your setup.
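For instance, a layout along these lines (names are illustrative) lets modules reach each other through the package rather than through sys.path edits:

myproject/
|- __init__.py
|- analysis/
|  |- __init__.py
|  |- stats.py
|- plotting/
|  |- __init__.py
|  |- charts.py

Any script run from the project root can then use imports like from myproject.analysis import stats without touching sys.path.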
It's not a particularly good practice, though you could get away with it. Better to look into virtualenv (or pipenv) for a smoother workflow.
Imagine that we are given the finished C++ source code of a library, called MyAwesomeLib. The goal is to expose some of its power to Python, so we create a wrapper using SWIG and generate a Python package called PyMyAwesomeLib.
The directory structure now looks like
root_dir
|-src/
|-lib/
| |- libMyAwesomeLib.so
| |- _PyMyAwesomeLib.so
|-swig/
| |- PyMyAwesomeLib.py
|-python/
|- Script_using_myawesomelib.py
So far so good. Ideally, all we want to do next is to copy lib/*.so, swig/*.py, and python/*.py into the corresponding directories in site-packages in a Pythonic way, i.e. using
python setup.py install
However, I got very confused when trying to achieve this simple goal using setuptools and distutils. Both tools handle the compilation of Python extensions through an internal system, where the source files, compiler flags, etc. are passed using setup(ext_modules=[Extension(...)]). But this is ridiculous, since MyAwesomeLib has a fully functioning build system based on makefiles. Porting the logic embedded in the makefiles would be redundant and completely unnecessary work.
After some research, it seems there are two options left: I can either override setuptools.command.build and setuptools.command.install to use the existing makefile and copy the results directly, or I can somehow let setuptools know about these files and ask it to copy them during installation. The second way is more appealing, but it is what gives me the most headache. I have tried the following options without success:
package_data and include_package_data do not work, because the *.so files are not under version control and are not inside any package.
data_files does not seem to work, since the files only get included when running python setup.py sdist, but are ignored by python setup.py install. This is the opposite of what I want: the .so files should not be included in the source distribution, but should get copied during the installation step.
MANIFEST.in failed for the same reason as data_files.
eager_resources does not work either, but honestly I do not know the difference between eager_resources and data_files or MANIFEST.in.
I think this is actually a common situation, and I hope there is a simple solution to it. Any help would be greatly appreciated.
Porting the logic embedded in makefiles would be redundant and completely unnecessary work.
Unfortunately, that's exactly what I had to do. I've been struggling with this same issue for a while now.
Porting it over actually wasn't too bad. distutils does understand SWIG extensions, but this was implemented rather haphazardly on their part. Running SWIG creates Python files, and the current build order assumes that all Python files have been accounted for before running build_ext. That one wasn't too hard to fix, but it's annoying that they would claim to support SWIG without mentioning this. distutils attempts to be cross-platform when compiling things, so there is still an advantage to using it.
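For anyone who prefers to keep the makefile anyway, here is a rough sketch of the first option the asker mentions (overriding a build command to drive the existing makefile); the makefile location, package layout, and the assumption that the built .so files end up inside the package directory are all mine, not the asker's:

import subprocess
from setuptools import setup
from setuptools.command.build_py import build_py

class build_with_make(build_py):
    """Run the existing makefile, then continue with the normal build."""
    def run(self):
        subprocess.check_call(["make", "-C", "src"])  # hypothetical makefile location
        super().run()

setup(
    name="PyMyAwesomeLib",
    version="0.1",
    packages=["PyMyAwesomeLib"],
    # Only picked up if the makefile drops the .so files into the package dir.
    package_data={"PyMyAwesomeLib": ["*.so"]},
    cmdclass={"build_py": build_with_make},
)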
If you don't want to port your entire build system over, use the system's package manager. Many complex libraries do this (but they also try their best with setup.py). For example, to get numpy and lxml on Ubuntu you'd just do:
sudo apt-get install python-numpy python-lxml
No pip.
I realize you'd rather write one setup file instead of dealing with every package manager ever, so this is probably not very helpful.
If you do try to go the setuptools route, there is one fatal flaw I ran into: dependencies.
For instance, if you are distributing a SWIG-based project, it's going to need libpython. If a user doesn't have it, an error like this happens:
#include <Python.h>
error: File not found
That's pretty unhelpful to the average user.
Even worse, if you require a shared library but the user's library is out of date, the user can get some crazy errors. You're at the mercy of their C++ compiler to output Google-friendly error messages so they can figure it out.
The long-term solution would be for setuptools/distutils to get better at detecting non-Python libraries, hopefully as good as Ruby's gem. I pretty much had to roll my own. For instance, in this setup.py I'm working on, you can see a few functions at the top that I hacked together for dependency detection (it still doesn't work on all systems... definitely not Windows).
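As a flavour of what such hand-rolled detection can look like, here is a minimal sketch (not the actual code from that setup.py) that checks for the Python headers before attempting a build:

import os
import sys
import sysconfig

def have_python_headers():
    """Return True if Python.h is available for compiling extensions."""
    include_dir = sysconfig.get_paths()["include"]
    return os.path.exists(os.path.join(include_dir, "Python.h"))

if not have_python_headers():
    sys.exit("Python development headers not found; "
             "install your system's python-dev/python-devel package.")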
I have a Python program that uses the following libraries: Tkinter, NumPy, SciPy, SymPy, and Matplotlib, and it will probably include more libraries in the near future as development continues.
How can I distribute such a program to Mac, Windows, and Linux users, without requiring users to install the right version of each library, and hopefully by just downloading a single file and executing it?
What I initially wanted was to compile the program into a static binary, but it seems that's not an easy goal.
I can probably ask users to install Python, but that's the maximum requirement I can ask of them, and I want to avoid it if possible.
I don't care about hiding the code at all; in the end I will distribute both the code and the program. What I want is to make the program very easy for any user to download and run.
Even advice such as 'a Python program is not suitable for such distribution' is welcome. I have a fair amount of experience with distributing C programs, but I don't know what to expect with a Python program.
For convenience, you could try something like PyInstaller.
It will package all the needed modules into one folder or one executable, as you like, and it can run on all major platforms.
The simple command to make a directory containing an executable file and all the needed libraries is:
$ pyinstaller --onedir --name=directory_name --distpath="path_to_put_that_directory" "path to your main_program.py"
You can change --onedir to --onefile to turn that folder into a single executable file that contains everything it needs to run.
You can use setuptools to do the packaging.
It creates eggs, which are roughly the equivalent of jars.
https://pypi.python.org/pypi/setuptools#using-setuptools-and-easyinstall
https://pythonhosted.org/setuptools/setuptools.html
You can have a look at py2exe, even though you risk the application becoming bigger than it already is, and some packages need to be installed manually.
I'm in the middle of reworking our build scripts to be based upon the wonderful Waf tool (I used SCons for ages, but it's just way too slow).
Anyway, I've hit the following situation and I cannot find a resolution to it:
I have a product that depends on a number of previously built egg files.
I'm trying to package the product using PyInstaller as part of the build process.
I build the dependencies first.
Next I want to run PyInstaller to package the product that depends on the eggs I built. I need PyInstaller to be able to load those egg files as part of its packaging process.
This sounds easy: you work out what PYTHONPATH should be, construct a copy of os.environ with the variable set up correctly, and then invoke the PyInstaller script using subprocess.Popen, passing the previously configured environment as the env argument.
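A sketch of that invocation (the egg directory and spec file name are assumptions):

import os
import subprocess

# Copy the current environment and prepend the freshly built eggs to PYTHONPATH.
env = os.environ.copy()
egg_dir = "/path/to/built/eggs"  # hypothetical output directory of the egg build
env["PYTHONPATH"] = egg_dir + os.pathsep + env.get("PYTHONPATH", "")

# Run PyInstaller with that environment.
subprocess.Popen(["pyinstaller", "product.spec"], env=env).wait()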
The problem is that setting PYTHONPATH alone does not seem to be enough if the eggs you are adding are extension modules that are packaged as zip-safe. In this case, it turns out that the embedded libraries cannot be imported.
If I unzip the eggs (renaming the directories to .egg), I can import them with no further settings, but this is not what I want in this case.
I can also get the eggs to import from a subshell by doing the following:
Setting PYTHONPATH to the directory that contains the egg you want to import (not the path of the egg itself)
Loading a Python shell and using pkg_resources.require to locate the egg.
Once this has been done, the egg loads as normal. Again, this is not practical, because I need to be able to run my Python shell in a manner where it is ready to import these eggs from the off.
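In code, that experiment amounts to roughly the following (the distribution and module names are hypothetical):

import pkg_resources

# PYTHONPATH already points at the directory that contains the .egg file;
# require() finds the egg on sys.path and activates it.
pkg_resources.require("MyExtensionPackage")
import my_extension_module  # now importable from inside the egg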
The dirty option would be to output a wrapper script that took the above actions before calling the real target script, but this seems like the wrong thing to do: there must be a better way.
Heh, I think this was my bad. The issue appears to have been that the zip_safe flag in setup.py for the extension package was set to False, which appears to affect your ability to treat the egg as zip-safe at all.
Now that I've set it to True, I can import the egg files simply by adding each one to the PYTHONPATH.
I hope someone else finds this answer useful one day!
Although you have a solution, you could always try virtualenv, which creates a virtual Python environment where you can install and test Python packages without messing with the core system Python:
http://pypi.python.org/pypi/virtualenv