I am writing a new Python application that I intend to distribute to several colleagues. Instead of my usual carefree approach of keeping everything self-contained inside a folder in my home directory, this time I would like to broaden my horizons and actually try to use the Linux directory structure as it was intended (at least somewhat). Can you please read my breakdown below and comment and/or make recommendations if this is not correct?
Let's call the application "narf":
/usr/narf - Install location for the actual Python file(s).
/usr/bin/narf - Either a symlink to the main Python file above, or this location used instead.
/etc/narf - Any configuration files for app narf.
/var/log/narf - Any log files for app narf.
/usr/lib - Any required libraries for app narf.
/run/narf - Runtime files for app narf (note: /run is typically a tmpfs, so its contents do not survive a reboot; state that must persist belongs under /var/lib).
/tmp/narf - Very temporary files for app narf that go away with reboot.
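For concreteness, inside narf this layout would translate into path constants along these lines (a sketch; the file name used below is hypothetical):

```python
import os

# Hypothetical path constants matching the proposed layout for "narf"
CONFIG_DIR = "/etc/narf"     # configuration files
LOG_DIR = "/var/log/narf"    # log files
RUN_DIR = "/run/narf"        # runtime files
TMP_DIR = "/tmp/narf"        # very temporary files, gone after reboot

def config_path(filename):
    """Build the full path of a config file under /etc/narf."""
    return os.path.join(CONFIG_DIR, filename)
```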
I assume I should stick to using /usr/X (for example, /usr/bin instead of just /bin) since my application is not system-critical and is a mere add-on.
I currently use Ubuntu 16 LTS, however part of this is intended as a way to try to standardize my app for any popular Linux distro.
Thanks for the help.
* UPDATE *
I think I see the answer to at least part of my question. Looking in /usr, I now see that it is a pretty barebones directory, almost akin to a user-level root directory (i.e. it has bin, lib, local, sbin, etc., but that's pretty much all). This leads me to believe my application should absolutely NOT live directly in /usr, and only its executable should be in /usr/bin.
You'd be better off putting your entire application into /opt. See here: http://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/Linux-Filesystem-Hierarchy.html#opt
Then put a soft link to the executable into /usr/local/bin. See here: https://unix.stackexchange.com/a/8658/219043
I wouldn't worry about the rest.
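As a sketch of that suggestion in Python (the entry-point name narf.py is an assumption, and the default paths require root), an install script could do:

```python
import os

def link_executable(app_dir="/opt/narf", link="/usr/local/bin/narf"):
    """Create (or refresh) a symlink in /usr/local/bin pointing at the
    application's entry point under /opt. Requires root for the defaults."""
    target = os.path.join(app_dir, "narf.py")  # hypothetical main script
    if os.path.islink(link) or os.path.exists(link):
        os.remove(link)
    os.symlink(target, link)
    return target
```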
Your application should not live in the /usr/ directory. If you want to package your application into a distribution, please refer to these guides:
Packaging and Distributing Projects
How To Package And Distribute Python Applications
You can certainly write to Unix directories from within your application when appropriate, but keep in mind there are mechanisms built into setup.py that help with the installation side of this (for example).
If this is something private, I'd suggest making this a private repository on GitHub and have your colleagues install it through pip.
I've made this question because I had to go through the whole process of creating my own application using Apple's somewhat lacking documentation, and without the use of py2app. I wanted to create the whole application structure so I know exactly what was inside, as well as create an installer for it. The latter of these is still a mystery, so any additional answers with information on making a custom installer would be appreciated. As far as the actual "bundle" structure goes, however, I think I've managed to get the basics down. See the answer below.
Edit: A tutorial has been linked at the end of this answer on using PyInstaller; I don't know how much it helps as I haven't used it yet, but I have yet to figure out how to make a standalone Python application without the use of a tool like this and it may just be what you're looking for if you wish to distribute your application without relying on users knowing how to navigate their Python installations.
A generic application is really just a directory with a .app extension. So, in order to build your application, just make the folder without the extension first. You can rename it later when you're finished putting it all together. Inside this main folder will be a Contents folder, which will hold everything your application needs. Finally, inside Contents, you will place a few things:
Info.plist
MacOS
Resources
Frameworks
Here you can find some information on how to write your Info.plist file. Basically, this is where you detail information about your application.
Inside MacOS you want to place your main executable. I'm not sure that it matters how you write it; at first, I just had a shell script that called python3 ./../Resources/MyApp.py. I didn't think this was very neat though, so eventually I called the GUI from a Python script which became my executable (I used Tkinter to build my application's GUI, and I wrote several modules which I will get to later). So now, my executable was a Python script with a shebang pointing to the Python framework in my application's Frameworks folder, and this script just created an instance of my custom Tk() subclass and ran the mainloop. Both methods worked, though, so unless someone points out a reason to choose one method over the other, feel free to pick. The one thing that I believe is necessary is that you name your executable the SAME as your application (before adding the .app). That, I believe, is the only way that MacOS knows to use that file as your application's executable. Here is a source that describes the bundle structure in more detail; it's not a necessary read unless you really want to get into it.
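As a sketch, the second approach (a Python script as the executable at Contents/MacOS/MyApp) might look like the following; the shebang path, class, and module names are all assumptions, and the Tk parts are commented out so the example stays runnable without a display:

```python
#!/usr/bin/env python3
# In a real bundle the shebang would instead point at the bundled interpreter,
# e.g. ../Frameworks/Python3.framework/Versions/3.8/bin/python3 (path assumed).
import os
import sys

def main():
    """Entry point: extend sys.path to Contents/Resources, then (in the real
    app) import the custom Tk subclass and run its mainloop()."""
    # Resolve Contents/Resources relative to this executable, not the cwd
    here = os.path.dirname(os.path.abspath(sys.argv[0] or "."))
    sys.path.insert(0, os.path.join(here, "..", "Resources"))
    # from MyApp import MyAppWindow   # hypothetical module in Resources
    # MyAppWindow().mainloop()
    return 0

if __name__ == "__main__":
    main()
```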
In order to make your executable run smoothly, you want to make sure you know where your Python installation is. If you're like me, the first thing you tried doing on your new Mac was to open up Terminal and type in python3. If this is the case, this prompted you to install the Xcode Command Line Tools, which include an installation of Python 3.8.2 (most recent on Xcode 12). Then, this Python installation would be located at /usr/bin/python3, although it's actually using the Python framework located at
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/bin/python3
I believe, but am NOT CERTAIN, that you could simply make a copy of this framework and add it to your Frameworks folder in order to make the app portable: copy the Python3.framework folder into your app's Frameworks folder. A quick side note to be wary of: Xcode comes packaged with a lot of useful tools. In my current progress, the tool I am most hurting for is the Fortran compiler (which I believe comes as a part of GCC), which comes with Xcode. I need this to build SciPy with pip install scipy. I'm sure this is not the only package that would require tools that Xcode provides, but SciPy is a pretty popular package and I am currently facing this limitation. I think by copying the Python framework you still lose some of the symlinks that point to Xcode tools, so any additional input on this would be great.
In any case, locate the Python framework that you use to develop your programs, and copy it into the Frameworks folder.
Finally, the Resources folder. Here, place any modules that you wrote for your Python app. You also want to put your application's icon file here. Just make sure you indicate the name of the icon file, with extension, in the Info.plist file. Also, make sure that your executable knows how to access any modules you place in here. You can achieve this with
import os, sys
# Add Resources to the import path relative to this script, not the cwd
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', 'Resources'))
import MyModules
Finally, make sure that any dependencies your application requires are located in the Python framework's site-packages. These will be located in Frameworks/Python3.framework/Versions/3.X/lib/python3.X/site-packages/. If you call this specific installation of Python from the command line, you can use path/to/application/python3 -m pip install package and it should place the packages in the correct folder.
P.S. As far as building the installer for this application goes, a few more steps are needed before your application is ready for distribution. For instance, I believe you need to use the codesign tool in order to get your application approved by MacOS Gatekeeper. This requires having a developer license and manipulating certificates, which I'm not familiar with. You can still distribute the app, but anyone who downloads it will have to bypass the security features manually, and it will seem a bit sketchy. If you're ready to build the installer (.pkg) file, take a look at the docs for productbuild; I used it and it works, but I don't yet know how to create custom steps and descriptions in the installer.
Additional resources:
A somewhat more detailed guide to the anatomy of a macOS app
A guide I found, but didn't use, on using codesign to get your app past Gatekeeper
A RealPython tutorial I found on using PyInstaller to build Python-based applications for all platforms
My project has a global asset directory (/usr/share/openage) that contains various files (graphics, texts, ...) and a user-specific asset directory (~/.openage) that allows the user to overwrite some of the global assets or add their own.
It is my understanding that when building, you pass the install prefix to the build system (e.g. ./configure --install-prefix=/usr), which will in turn generate a file (e.g. configure.h) which makes the install prefix available for the code (e.g. #define INSTALL_PREFIX="/usr"). The code will then look for its assets in INSTALL_PREFIX "share/openage". So far, so good.
However, when the project hasn't been installed yet (which is true 99.9% of the time for me as the developer), the directory /usr/share/openage obviously doesn't exist yet; instead, I would want to use ./assets in the current directory. Even worse, if the installed directory exists (e.g. from an independent, earlier install), its data might be incompatible with the current dev version.
Similarly, when running the installed project, I'd want it to use the user's home directory (~/.openage) as the user asset directory, while in "devmode" it should use a directory like "./userassets".
It gets even worse when thinking about non-POSIX platforms. On Windows, INSTALL_PREFIX is useless since programs can be installed basically anywhere (do programs simply use the current working directory as the asset directory?), and I don't have the slightest idea how Mac handles this.
So my question is: is there a generally accepted "best way to do this"? Surely, hundreds of projects (basically every single project that has an asset directory) have dealt with this problem one way or another.
Unfortunately, I don't even know what to google for. I don't even know how to tag this question.
Current ideas (and associated issues) include:
Looking for a file openage_version, which only exists in the source directory, in cwd. If it exists, assume that the project is currently uninstalled.
Issue: Even in "development mode", cwd might not always be the project root directory.
Checking whether readlink("/proc/self/exe") starts with INSTALL_PREFIX
Issue: platform-specific
Issue: theoretically, the project root directory could be in /usr/myweirdhomedirectory/git/openage
Forcing developers to specify an argument --not-installed, or to set an environment variable, OPENAGE_INSTALLED=0
Issue: Inconvenient
Issue: Forgetting to specify the argument will lead to confusion when the wrong asset directory is used
During development, call ./configure with a different INSTALL_PREFIX
Issue: When the project is built for installing, the recommended make test will run tests while the project is not installed
A combination of the first two options: checking for dirname(readlink("/proc/self/exe")) + "/openage_version"
Issue: Even more platform-specific
This seems like the most robust option so far
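That most robust option could be sketched in Python like this (the marker file name comes from the question; making the executable path a parameter is my addition, to keep it testable):

```python
import os

def is_in_devmode(exe_path, marker="openage_version"):
    """Return True if the marker file sits next to the executable,
    i.e. we are running from an uninstalled source checkout."""
    exe_dir = os.path.dirname(os.path.realpath(exe_path))
    return os.path.exists(os.path.join(exe_dir, marker))
```

In the real binary, exe_path would come from readlink("/proc/self/exe") on Linux, or sys.argv[0] in Python.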
The solution I finally chose is to do the decision in the Python part of the application.
There is a Python module, buildsystem.setup, which does not get installed during make install.
Using this fact, I can simply do:

def is_in_devmode():
    try:
        from ..buildsystem import setup  # present only in the source tree
        return True
    except ImportError:
        return False
I've got a Python (2.x) project which contains 2 Python fragment configuration files (import config; config.FOO and so on). Everything is installed using setuptools, causing these files to end up in the site-packages directory. From a UNIX perspective it would be nice to have the configuration for a software suite situated in /etc, so people could just edit it without having to crawl into /usr/lib/python*/site-packages. On the other hand, it would be nice to retain the hassle-free importing.
I've got 2 "fixes" in mind that would resolve this issue:
Create a softlink from /etc/stuff.cfg to the file in site-packages (non-portable and ugly)
Write a configuration management tool (somewhat like a registry) that edits site-packages directly (way more work than I am willing to do).
I am probably just incapable of finding the appropriate documentation as I can't imagine that there is no mechanism to do this.
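One common workaround (a sketch; the /etc path and function name are hypothetical) is a small loader that prefers the system-wide file and falls back to the packaged default:

```python
import os
import runpy

def load_config(path="/etc/stuff.cfg", fallback=None):
    """Execute a Python-fragment config file and return its names as a dict.

    If the system-wide file exists (editable without touching site-packages),
    it wins; otherwise fall back to the packaged module's namespace."""
    if os.path.exists(path):
        return runpy.run_path(path)
    return dict(vars(fallback)) if fallback is not None else {}
```

The application would then read cfg = load_config(); cfg["FOO"] instead of config.FOO, at the cost of the dict-style access.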
As the title asks, what is the technically proper location to store Python virtual environments on Linux operating systems according to the Linux FHS?
Stated another way that allows for a clear answer: Is it "technically correct" to separate the location of a Python virtual environment from the data files you are serving?
Note: This question differs from the closest already-asked question I could find, as virtual environments contain libraries, binaries, header files, and scripts.
As an added complication, I tend to write code that supports internet-accessible services. However, I don't see this as substantially differentiating my needs from scenarios in which the consumers of the service are other processes on the same server. I'm mentioning this detail in case my responses to comments include "web dev"-esque content.
For reference, I am using the following documentation as my definition of the Linux FHS: http://www.pathname.com/fhs/pub/fhs-2.3.html
I do not believe the popular virtualenvwrapper script suggests the correct action, as it defaults to storing virtual environments in a user's home directory. This violates the implicit concept that the directory is for user-specific files, as well as the statement that "no program should rely on this location."
From the root level of the file system, I lean towards /usr (shareable, read-only data) or /srv (Data for services provided by this system), but this is where I have a hard time deciding further.
If I were to follow the lead of my go-to reverse proxy, that would mean /usr. Nginx is commonly packaged to go into /usr/share/nginx or /usr/local/nginx; however, /usr is supposed to be mounted read-only according to the FHS. I find this strange, because I've never worked on a single project in which development happened so slowly that "unmount as read-only/remount with write, unmount/remount as read-only" was considered worth the effort.
/srv is another possible location, but it is described as the "location of the data files for particular service," whereas a Python virtual environment is more focused on the libraries and binaries for what provides a service (without this differentiation, .so files would also be in /srv). Also, multiple services with the same requirements could share a virtual environment, which violates the "particular" detail of the description.
I believe that part of the difficulty in choosing a correct location is because the virtual environment is an "environment," which consists of both binaries and libraries (almost like its own little hierarchy), which pushes my impression that somewhere under /usr is more conventional:
virtual-env/
├── bin ~= /usr/local : "for use by the system administrator when installing software locally"
├── include ~= /usr/include : "Header files included by C programs"
├── lib ~= /usr/lib : "Libraries for programming and packages"
└── share ~= /usr/share : "Architecture-independent data"
With my assumptions and thoughts stated: consider the common scenario of Nginx acting as a reverse proxy to a Python application. Is it correct to place a virtual environment and source code (e.g. application.py) under /usr/local/service_name/ while using /srv for files that are changed more often (e.g. 'static' assets, images, css)?
edit: To be clear: I know why and how to use virtualenvs. I am by no means confused about project layouts or working in development environments.
As the title asks, what is the technically proper location to store Python virtual environments on Linux operating systems according to the Linux FHS?
Keep in mind that the Linux FHS is not really a standard; it is a set of guidelines. It is only referred to as a standard by the LSB, which is just a set of rules that make supporting Linux easier.
/run, /sys, /proc, and /usr/local are all not part of the FHS, but you see them in most Linux distributions.
For me the clear choice to put virtual environments is /opt, because this location is reserved for the installation of add-on software packages.
However, on most Linux distributions only root can write to /opt, which makes this a poor choice because one of the main goals of virtual environments is to avoid being root.
So, I would recommend /usr/local (if it's writable by your normal user account), but there is nothing wrong with installing it in your home directory.
Stated another way that allows for a clear answer: Is it "technically correct" to separate the location of a Python virtual environment from the data files you are serving?
I'm not sure what you mean by "data files you are serving", but here are the rules for virtual environments:
Don't put them in source control.
Maintain a list of installed packages, and put this in version control. Remember that virtual environments are not exactly portable.
Keep your virtual environment separate from your source code.
Given the above, you should keep your virtual environment separate from your source code.
consider the common scenario of Nginx acting as a reverse proxy to a Python application. Is it correct to place a virtual environment and source code (e.g. application.py) under /usr/local/service_name/ while using /srv for more dynamic files (e.g. 'static' assets, images)?
Static assets are not dynamic files, I think you are confusing terms.
Either way, you should do the following:
Create a user account to run that application.
Put the application files under a directory that is controlled by that user and that user alone. Typically this is the /home/username directory, but you can make it /services/servicename. Place the virtual environment in a subdirectory of this directory, with a standard name; for example, I use env.
Put your static assets, such as all media files, css files, etc. in a directory that is readable by your front end server. So, typically you would make a www directory or a public_html directory.
Make sure that the user account you create for this application has write access to this asset directory, so that you are able to update files. The proxy server should not have execute permissions on this directory. You can accomplish this by changing the group of the directory to the same as that of the proxy server user. Given this, I would put this directory under /home/username/ or /services/servicename.
Launch the application using a process manager, and make sure your process manager switches the user to the one created in step 1 when running your application code.
Finally, I cannot stress this enough: DOCUMENT YOUR PROCESS and AUTOMATE IT.
I started a new Python project and I want to have a good structure from the beginning. I'm reading some Python convention guides but I can't find any info about how the main script should be named. Are there any rules for this? Is there any other kind of convention for folders or text files inside the project (like README files)?
By the way, I'm programming a client-server app, so there is no way for this to become a package (at least in the way I think of a package).
If you want to package your application so that a ZIP file containing it (or its directory) can be passed as an argument to the Python interpreter to run the application, name your main script __main__.py. If you don't care about being able to do this (and most Python applications don't), name it whatever you want.
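A quick, self-contained demonstration of that ZIP behaviour (the file names are arbitrary):

```python
import subprocess
import sys
import zipfile

# Build a tiny application archive whose entry point is __main__.py, then run
# it by passing the ZIP file itself to the interpreter.
with zipfile.ZipFile("demo_app.zip", "w") as zf:
    zf.writestr("__main__.py", "print('hello from the zipped app')\n")

output = subprocess.check_output([sys.executable, "demo_app.zip"])
# output is b'hello from the zipped app\n'
```

Python 3 also ships a zipapp module in the standard library that automates this packaging step.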
No such rule exists for the Python main script that starts your application. There are coding guidelines (PEP 8) which you can follow to keep your code clean, though.
You can check existing Python applications, which are easily available: open source/free software projects, e.g. the yum command (on RPM-based distros), and lots of other Python apps (you can check them out from publicly available source code management systems, e.g. a Git repo). You can see the basic principles they follow, but there are no constraints as such.