Is it possible to create "uninstall hooks" in setup.py files using setuptools.setup()?
I have an issue: my package needs to store some configuration files on the computer, but when the user uninstalls the package, the configuration files stay behind. How can I detect when a user uninstalls my package?
My guesses at how to tackle this problem are:
a) Use some functionality in setuptools.setup() to create such a hook. I couldn't find any information that this exists, but even if it did, manual removal of files from the site-packages directory probably wouldn't be detected.
b) Create a daemon that starts when the machine boots and periodically checks whether the package still exists, removing the config files once it is gone. This approach could work, but it is complicated, system dependent and error prone, while I want a simple solution.
Related
I would like to keep multiple requirements*.txt files up to date while working on a project. Some packages my project depends on are required at runtime, while others are required during development only. Since these packages may have their own dependencies as well, it is hard to tell which dependency should be in which requirements*.txt file.
If I want to keep track of the runtime dependencies in requirements_prod.txt and of the development dependencies in requirements_dev.txt, how do I keep both files up to date and clean as I add packages during development? A mere pip freeze > requirements_prod.txt would list all installed dependencies, including those needed only for development, and would therefore pollute whichever requirements_*.txt file it was written to.
Ideally, I would like to mark a package on installation as 'development' or 'runtime' and have it (and its own dependencies) written to the correct requirements_*.txt.
Edit:
@Brian: My question is slightly different from that question because I would like my requirements_*.txt files to stay side by side in the same branch, not in different branches. So my requirements_*.txt files should always be in the same commits.
Brian's answer clarifies things a lot for me:
Usually you only want to add direct dependencies to your requirements file.
(...) Both of those files should be maintained manually
So instead of generating the requirements_*.txt files automatically using pip freeze, they should be maintained manually and only need to contain direct dependencies.
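For illustration, a minimal hand-maintained pair (the package names are hypothetical); pip's -r include lets the development file pull in the runtime dependencies so nothing is duplicated:

# requirements_prod.txt -- direct runtime dependencies only
requests>=2.20

# requirements_dev.txt -- development-only additions
-r requirements_prod.txt
pytest
flake8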
I am writing a new Python application that I intend to distribute to several colleagues. Instead of my normal carefree attitude of just having everything self-contained and run inside a folder in my home directory, this time I would like to broaden my horizons and actually try to use the Linux directory structure as it was intended (at least somewhat). Can you please read my breakdown below and comment and/or make recommendations if this is not correct?
Let's call the application "narf".
/usr/narf - Install location for the actual python file(s).
/usr/bin/narf - Either a softlink to the main python file above or use this location instead.
/etc/narf - Any configuration files for app narf.
/var/log/narf - Any log files for app narf.
/usr/lib - Any required libraries for app narf.
/run/narf - Any persistent (across reboot), but still temp files for app narf.
/tmp/narf - Very temp files for app narf that go away with reboot
I assume I should stick to using /usr/X (for example /usr/bin instead of just /bin) since my application is not system critical and a mere addon.
I currently use Ubuntu 16 LTS, however part of this is intended as a way to try to standardize my app for any popular Linux distro.
Thanks for the help.
* UPDATE *
I think I see the answer to at least part of my question. Looking in /usr, I now see that it is a pretty barebones directory, almost akin to a user-level root directory (i.e. it has bin, lib, local, sbin, etc., but that's pretty much all). This leads me to believe my application should absolutely NOT live directly in /usr, and ONLY in /usr/bin.
You'd be better off putting your entire application into /opt. See here: http://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/Linux-Filesystem-Hierarchy.html#opt
Then put a soft link to the executable into /usr/local/bin. See here: https://unix.stackexchange.com/a/8658/219043
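For example (hypothetical paths): ln -s /opt/narf/bin/narf /usr/local/bin/narf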
I wouldn't worry about the rest.
Your application should not live in the /usr/ directory. If you want to package your application into a distribution, please refer to these guides:
Packaging and Distributing Projects
How To Package And Distribute Python Applications
You can for sure write to Unix directories from within your application when appropriate, but keep in mind there are mechanisms built into setup.py that help with the installation side of this (the data_files argument, for example).
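A minimal sketch of that mechanism (package name, file names and paths are hypothetical, not taken from the question):

from setuptools import setup

setup(
    name="narf",
    version="0.1",
    py_modules=["narf"],
    # data_files is a list of (target_dir, [source_files]) pairs;
    # relative target dirs are resolved against sys.prefix at install time.
    data_files=[("etc/narf", ["conf/narf.cfg"])],
)

Absolute target paths are possible but discouraged, since they do not play well with wheel-based installs.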
If this is something private, I'd suggest making this a private repository on GitHub and have your colleagues install it through pip.
My project has a global asset directory (/usr/share/openage) that contains various files (graphics, texts, ...) and a user-specific asset directory (~/.openage) that allows the user to overwrite some of the global assets/add their own.
It is my understanding that when building, you pass the install prefix to the build system (e.g. ./configure --install-prefix=/usr), which will in turn generate a file (e.g. configure.h) which makes the install prefix available for the code (e.g. #define INSTALL_PREFIX="/usr"). The code will then look for its assets in INSTALL_PREFIX "share/openage". So far, so good.
However, when the project hasn't been installed yet (which is true in 99.9% of cases for me as the developer), the directory /usr/share/openage obviously doesn't exist yet; instead, I would want to use ./assets in the current directory. Even worse, if the installed directory exists (e.g. from an independent, earlier install), it might contain data incompatible with the current dev version.
Similarly, if running the installed project, I'd want it to use the user's home directory (~/.openage) as the user asset directory, while in "devmode" it should use a directory like "./userassets".
It gets even worse when thinking about non-POSIX platforms. On Windows, INSTALL_PREFIX is useless since programs can be installed basically anywhere (do programs simply use the current working directory as the asset directory?), and I don't have the slightest idea how macOS handles this.
So my question is: Is there a generally accepted "best way to do this"? Surely, hundreds of projects (basically every single project that has an asset directory) have dealt with this problem one way or another.
Unfortunately, I don't even know what to google for. I don't even know how to tag this question.
Current ideas (and associated issues) include:
Looking for a file openage_version, which only exists in the source directory, in cwd. If it exists, assume that the project is currently uninstalled.
Issue: Even in "development mode", cwd might not always be the project root directory.
Checking whether readlink("/proc/self/exe") starts with INSTALL_PREFIX
Issue: platform-specific
Issue: theoretically, the project root directory could be in /usr/myweirdhomedirectory/git/openage
Forcing developers to specify an argument --not-installed, or to set an environment variable, OPENAGE_INSTALLED=0
Issue: Inconvenient
Issue: Forgetting to specify the argument will lead to confusion when the wrong asset directory is used
During development, call ./configure with a different INSTALL_PREFIX
Issue: When the project is built for installing, the recommended make test will run tests while the project is not installed
A combination of the first two options: checking for dirname(readlink("/proc/self/exe")) + "/openage_version"
Issue: Even more platform-specific
This seems like the most robust option so far
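A sketch of that last option (Linux-only; it assumes a compiled launcher, since for a pure-Python entry point /proc/self/exe resolves to the interpreter rather than the application):

import os

def is_in_devmode():
    # Find the directory containing the running executable and look
    # for the marker file that only exists in the source tree.
    exe_dir = os.path.dirname(os.readlink("/proc/self/exe"))
    return os.path.exists(os.path.join(exe_dir, "openage_version"))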
The solution I finally chose is to make the decision in the Python part of the application.
There is a Python module, buildsystem.setup, which does not get installed during make install.
Using this fact, I can simply:
def is_in_devmode():
    try:
        # buildsystem.setup exists only in the source tree;
        # `make install` never copies it, so a successful import
        # means we are running uninstalled.
        from ..buildsystem import setup  # noqa: F401
        return True
    except ImportError:
        return False
I've got a Python (2.x) project which contains 2 Python fragment configuration files (import config; config.FOO and so on). Everything is installed using setuptools, causing these files to end up in the site-packages directory. From a UNIX perspective it would be nice to have the configuration for a software suite situated in /etc so people could just edit it without having to crawl into /usr/lib/python*/site-packages. On the other hand, it would be nice to retain the hassle-free importing.
I've got 2 "fixes" in mind that would resolve this issue:
Create a softlink from /etc/stuff.cfg to the file in site-packages (non-portable and ugly)
Write a configuration management tool (somewhat like a registry) that edits site-packages directly (way more work than I am willing to do).
I am probably just incapable of finding the appropriate documentation, as I can't imagine that there is no mechanism for this.
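For illustration, one pragmatic pattern (not a built-in setuptools mechanism; the paths and names are hypothetical) keeps the hassle-free import config; config.FOO usage while letting a file in /etc take precedence:

# config.py -- installed into site-packages with sane defaults
import os

FOO = "default value"
BAR = 42

# If a system-wide override exists, execute it in this module's
# namespace so its assignments replace the defaults above.
_SYSTEM_CFG = "/etc/stuff.cfg"
if os.path.exists(_SYSTEM_CFG):
    execfile(_SYSTEM_CFG)  # Python 2; on 3: exec(open(_SYSTEM_CFG).read())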
As the title asks, what is the technically proper location to store Python virtual environments on Linux operating systems according to the Linux FHS?
Stated another way that allows for a clear answer: Is it "technically correct" to separate the location of a Python virtual environment from the data files you are serving?
Note: This question differs from the closest, already-asked question I could find, as virtual-environments contain libraries, binaries, header files, and scripts.
As an added complication, I tend to write code that supports internet-accessible services. However, I don't see this as substantially differentiating my needs from scenarios in which the consumers of the service are other processes on the same server. I'm mentioning this detail in case my responses to comments include "web dev"-esque content.
For reference, I am using the following documentation as my definition of the Linux FHS: http://www.pathname.com/fhs/pub/fhs-2.3.html
I do not believe the popular virtualenvwrapper script suggests the correct action, as it defaults to storing virtual environments in a user's home directory. This violates the implicit concept that the directory is for user-specific files, as well as the statement that "no program should rely on this location."
From the root level of the file system, I lean towards /usr (shareable, read-only data) or /srv (Data for services provided by this system), but this is where I have a hard time deciding further.
If I were to follow the lead of my go-to reverse proxy, that would mean /usr. Nginx is commonly packaged to go into /usr/share/nginx or /usr/local/nginx; however, /usr is supposed to be mounted read-only according to the FHS. I find this strange, because I've never worked on a single project in which development happened so slowly that "unmount read-only/remount writable, then unmount/remount read-only" was considered worth the effort.
/srv is another possible location, but it is described as the "location of the data files for a particular service," whereas a Python virtual environment is more focused on the libraries and binaries of whatever provides a service (without this differentiation, .so files would also be in /srv). Also, multiple services with the same requirements could share a virtual environment, which violates the "particular" detail of the description.
I believe that part of the difficulty in choosing a correct location is because the virtual environment is an "environment," which consists of both binaries and libraries (almost like its own little hierarchy), which pushes my impression that somewhere under /usr is more conventional:
virtual-env/
├── bin ~= /usr/local : "for use by the system administrator when installing software locally"
├── include ~= /usr/include : "Header files included by C programs"
├── lib ~= /usr/lib : "Libraries for programming and packages"
└── share ~= /usr/share : "Architecture-independent data"
With my assumptions and thoughts stated: consider the common scenario of Nginx acting as a reverse proxy to a Python application. Is it correct to place a virtual environment and source code (e.g. application.py) under /usr/local/service_name/ while using /srv for files that are changed more often (e.g. 'static' assets, images, css)?
edit: To be clear: I know why and how to use virtualenvs. I am by no means confused about project layouts or working in development environments.
As the title asks, what is the technically proper location to store Python virtual environments on Linux operating systems according to the Linux FHS?
Keep in mind that the Linux FHS is not really a standard; it is a set of guidelines. It is only referred to as a standard by the LSB, which is just a set of rules that makes supporting Linux easier.
/run, /sys, /proc and /usr/local are all not part of the FHS, but you see them in most Linux distributions.
For me the clear choice to put virtual environments is /opt, because this location is reserved for the installation of add-on software packages.
However, on most Linux distributions only root can write to /opt, which makes this a poor choice because one of the main goals of virtual environments is to avoid being root.
So, I would recommend /usr/local (if it's writable by your normal user account), but there is nothing wrong with installing it in your home directory.
Stated another way that allows for a clear answer: Is it "technically correct" to separate the location of a Python virtual environment from the data files you are serving?
I'm not sure what you mean by "data files you are serving", but here are the rules for virtual environments:
Don't put them in source control.
Maintain a list of installed packages, and put this in version control. Remember that virtual environments are not exactly portable.
Keep your virtual environment separate from your source code.
Given the above, you should keep your virtual environment separate from your source code.
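Since the environment is not portable, the practical consequence is: version the pinned package list and rebuild the environment wherever it is needed. A sketch using Python 3's venv module (the env and requirements.txt names are assumptions):

# rebuild_env.py -- recreate a virtual environment from its pinned list
import subprocess
import venv

venv.create("env", with_pip=True)  # fresh environment in ./env
subprocess.check_call(["env/bin/pip", "install", "-r", "requirements.txt"])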
consider the common scenario of Nginx acting as a reverse proxy to a Python application. Is it correct to place a virtual environment and source code (e.g. application.py) under /usr/local/service_name/ while using /srv for more dynamic files (e.g. 'static' assets, images)?
Static assets are not dynamic files; I think you are confusing terms.
Either way, you should do the following:
Create a user account to run that application.
Put the application files under a directory that is controlled by that user and that user alone. Typically this is the /home/username directory, but you can make it /services/servicename. Place the virtual environment in a subdirectory of this directory, using a standard name; for example, I use env.
Put your static assets, such as all media files, css files, etc. in a directory that is readable by your front end server. So, typically you would make a www directory or a public_html directory.
Make sure that the user account you create for this application has write access to this asset directory, so that you are able to update files. The proxy server needs to read this directory but should not be able to write to it; you can accomplish this by changing the group of the directory to that of the proxy server user. Given this, I would put this directory under /home/username/ or /services/servicename.
Launch the application using a process manager, and make sure your process manager switches the user to the one created in step 1 when running your application code.
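For example, a minimal systemd unit implementing that user switch (all names and paths are hypothetical):

[Unit]
Description=servicename application

[Service]
User=servicename
WorkingDirectory=/services/servicename
ExecStart=/services/servicename/env/bin/python application.py
Restart=on-failure

[Install]
WantedBy=multi-user.target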
Finally, I cannot stress this enough: DOCUMENT YOUR PROCESS and AUTOMATE IT.