I'm new to this sort of thing and can't figure out how to solve it. I've tried searching online but can't find anything. Please help.
So I downloaded a repo, and it wants me to copy a file to a "/opt/TradingBot/data" folder. For the life of me, I can't find where this opt/ folder is. At first I thought it meant the repo's own directory, so I added a data folder there, but the script still isn't running properly.
You may find clarity in reading https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/opt.html:
This directory is reserved for all the software and add-on packages that are not part of the default installation. For example, StarOffice, Kylix, Netscape Communicator and WordPerfect packages are normally found here. To comply with the FSSTND, all third-party applications should be installed in this directory. Any package to be installed here must locate its static files (i.e. extra fonts, clipart, database files) in a separate /opt/'package' or /opt/'provider' directory tree (similar to the way in which Windows will install new software to its own directory tree C:\Program Files\"Program Name"), where 'package' is a name that describes the software package and 'provider' is the provider's LANANA registered name.
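If the repo's instructions really do mean the system-wide /opt directory, it usually has to be created first, and writing there normally requires root. A minimal sketch in Python (the file name config.json is just a placeholder for whatever the repo tells you to copy):

import shutil
from pathlib import Path

# /opt is owned by root, so run this with sufficient privileges (e.g. sudo).
data_dir = Path("/opt/TradingBot/data")
data_dir.mkdir(parents=True, exist_ok=True)  # also creates /opt/TradingBot
shutil.copy("config.json", data_dir)         # placeholder file name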
I'm climbing the Python learning curve and trying to understand where to put everything.
I originally have a Python module in a folder with a subfolder src. In this src folder I have my main source file, say main.py, and a models folder storing my model code:
/myproject/src/main.py
/myproject/src/models/a_model.py
/myproject/src/models/b_model.py
(The model files are named with underscores here because hyphens are not valid in Python module names; a file called a-model.py could never be imported.)
So my main will import the model like this:
from models.a_model import a
Then when I package it, I just zip the myproject folder with that folder structure, deploy it, and everything is fine.
Now I have another module that does something different but needs to use the same models.
I could easily duplicate them all, develop the code separately, and deploy. But I would like to share the model code, so that when a model changes I only need to update it in one place instead of two.
My new module looks like this:
/mynew/src/main-b.py
/mynew/src/models/a_model.py
/mynew/src/models/b_model.py
What is the best practice for doing this? Do I restructure it like this?
/myproject/src/main.py
/mynew/src/main-b.py
/models/a_model.py
/models/b_model.py
And then update the import?
But then I'm unsure how to deploy. Do I also have to set up the same folder structure?
One option is to add /myproject/src to the PYTHONPATH environment variable. Python adds the directories listed in PYTHONPATH to sys.path, the list of directories Python searches when you try to import something. This is fragile, because modifying PYTHONPATH has side effects of its own; fortunately, virtual environments provide a way to get around those side effects.
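For illustration, the same effect can be had at runtime by editing sys.path directly, with the same caveats (the path is the one from the question):

import sys

# Directories listed in PYTHONPATH are prepended to sys.path at startup;
# inserting here by hand is equivalent, and equally fragile.
sys.path.insert(0, "/myproject/src")
from models.a_model import a  # now resolvable from the other project too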
Alternatively, and much better, you could add your modules to the site-packages directory. site-packages is added to sys.path by default, which obviates the need to modify PYTHONPATH. To locate the site-packages directory, refer to this page from the Python documentation: Installing Python Modules (Legacy version).
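You can also ask Python itself where site-packages lives; a quick sketch (these helpers exist in standard CPython installs):

import site

print(site.getsitepackages())      # system-wide site-packages directories
print(site.getusersitepackages())  # the per-user site-packages directory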
You could also use the LiClipse IDE, which comes with PyDev already installed. Create a source folder from the IDE and link your previous project with your newer project. When you link your projects, the IDE adds the source folders of your older project to the PYTHONPATH of your newer project, and thus Python will be able to locate your modules.
I've followed the PyCharm documentation to set up the IDE to resolve imports. However, it seems that each folder containing *.py files needs to be explicitly added as a 'sources root' in order for the IDE to resolve all references. Can this be done recursively from a root folder?
Is this the correct way to get the IDE to resolve all codebase references, or have I not set up my project structure correctly?
I have already followed other methods for resolving references in the IDE (here and here) but to no avail. It seems that the IDE will only resolve them if I manually add each folder as a 'sources root'. Without recursive functionality, setting up the IDE for large codebases will be laborious!
If you have not used __init__.py, you should add one in each subdirectory to mark it as a package. By adding it, Python will treat the directories as containing packages, making your modules visible to other directories and therefore importable.
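For example, with the layout from the earlier question, the marker file sits next to the modules (a sketch reusing those names):

# src/
# ├── main.py
# └── models/
#     ├── __init__.py   <- empty file that marks models as a package
#     └── a_model.py
#
# main.py can then resolve the import:
from models.a_model import a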
It's possible to select all folders in the root at once by going to Project Structure in Preferences, selecting all of them, and clicking the blue Sources icon at the top. This was at least doable with PyCharm CE 2021.2 running on Big Sur.
I've got a Python (2.x) project which contains two Python-fragment configuration files (import config; config.FOO and so on). Everything is installed using setuptools, causing these files to end up in the site-packages directory. From a UNIX perspective it would be nice to have the configuration for a software suite in /etc, so people could just edit it without having to crawl into /usr/lib/python*/site-packages. On the other hand, it would be nice to retain the hassle-free importing.
I've got 2 "fixes" in mind that would resolve this issue:
Create a softlink from /etc/stuff.cfg to the file in site-packages (non-portable and ugly)
Write a configuration management tool (somewhat like a registry) that edit site-packages directly (waay more work that I am willing to do).
I am probably just incapable of finding the appropriate documentation, as I can't imagine that there is no mechanism for this.
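For what it's worth, the first "fix" can also be approximated in pure Python instead of a filesystem symlink. A sketch using the question's names (a workaround, not a standard mechanism):

import os

FOO = "default value"  # package default, shipped in site-packages

# If an editable copy exists in /etc, let its assignments override the defaults.
_SYSTEM_CONFIG = "/etc/stuff.cfg"
if os.path.exists(_SYSTEM_CONFIG):
    with open(_SYSTEM_CONFIG) as fh:
        exec(fh.read(), globals())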
As the title asks, what is the technically proper location to store Python virtual environments on Linux operating systems according to the Linux FHS?
Stated another way that allows for a clear answer: Is it "technically correct" to separate the location of a Python virtual environment from the data files you are serving?
Note: This question differs from the closest, already-asked question I could find, as virtual environments contain libraries, binaries, header files, and scripts.
As an added complication, I tend to write code that supports internet-accessible services. However, I don't see this as substantially differentiating my needs from scenarios in which the consumers of the service are other processes on the same server. I'm mentioning this detail in case my responses to comments include "web dev"-esque content.
For reference, I am using the following documentation as my definition of the Linux FHS: http://www.pathname.com/fhs/pub/fhs-2.3.html
I do not believe the popular virtualenvwrapper script suggests the correct action, as it defaults to storing virtual environments in a user's home directory. This violates the implicit concept that the home directory is for user-specific files, as well as the statement that "no program should rely on this location."
From the root level of the file system, I lean towards /usr (shareable, read-only data) or /srv (Data for services provided by this system), but this is where I have a hard time deciding further.
If I were to follow the lead of my go-to reverse proxy, that would mean /usr. Nginx is commonly packaged to go into /usr/share/nginx or /usr/local/nginx; however, /usr is supposed to be mounted read-only according to the FHS. I find this strange, because I've never worked on a single project where development happened so slowly that "remount read-write, update, remount read-only" was considered worth the effort.
/srv is another possible location, but it is described as the "location of the data files for a particular service," whereas a Python virtual environment is more focused on the libraries and binaries for whatever provides a service (without this differentiation, .so files would also be in /srv). Also, multiple services with the same requirements could share a virtual environment, which violates the "particular" detail of the description.
I believe that part of the difficulty in choosing a correct location is that a virtual environment is an "environment" consisting of both binaries and libraries (almost like its own little hierarchy), which reinforces my impression that somewhere under /usr is more conventional:
virtual-env/
├── bin ~= /usr/local : "for use by the system administrator when installing software locally"
├── include ~= /usr/include : "Header files included by C programs"
├── lib ~= /usr/lib : "Libraries for programming and packages"
└── share ~= /usr/share : "Architecture-independent data"
With my assumptions and thoughts stated, consider the common scenario of Nginx acting as a reverse proxy to a Python application. Is it correct to place the virtual environment and source code (e.g. application.py) under /usr/local/service_name/ while using /srv for files that change more often (e.g. 'static' assets, images, CSS)?
Edit: To be clear, I know why and how to use virtualenvs. I am by no means confused about project layouts or working in development environments.
As the title asks, what is the technically proper location to store Python virtual environments on Linux operating systems according to the Linux FHS?
Keep in mind that the Linux FHS is not really a standard; it is a set of guidelines. It is only referred to as a standard by the LSB, which is just a set of rules that make supporting Linux easier.
/run, /sys, /proc, and /usr/local are all not part of the FHS, but you see them in most Linux distributions.
For me the clear place to put virtual environments is /opt, because this location is reserved for the installation of add-on software packages.
However, on most Linux distributions only root can write to /opt, which makes this a poor choice because one of the main goals of virtual environments is to avoid being root.
So I would recommend /usr/local (if it's writable by your normal user account), but there is nothing wrong with installing it in your home directory.
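Whichever location you settle on, creating the environment there is the same. A sketch using the standard-library venv module (the path is illustrative):

import venv

# Equivalent to: python -m venv /usr/local/myservice/env
venv.create("/usr/local/myservice/env", with_pip=True)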
Stated another way that allows for a clear answer: Is it "technically correct" to separate the location of a Python virtual environment from the data files you are serving?
I'm not sure what you mean by "data files you are serving", but here are the rules for virtual environments:
Don't put them in source control.
Maintain a list of installed packages, and put this in version control. Remember that virtual environments are not exactly portable.
Keep your virtual environment separate from your source code.
Given the above, you should keep your virtual environment separate from your source code.
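One way to produce that package list from inside the environment itself (a sketch for Python 3.8+; pip freeze > requirements.txt is the usual shell equivalent):

from importlib.metadata import distributions

# Print installed distributions as pinned requirements, pip-freeze style.
for dist in sorted(distributions(), key=lambda d: d.metadata["Name"].lower()):
    print(f'{dist.metadata["Name"]}=={dist.version}')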
Consider the common scenario of Nginx acting as a reverse proxy to a Python application. Is it correct to place a virtual environment and source code (e.g. application.py) under /usr/local/service_name/ while using /srv for more dynamic files (e.g. 'static' assets, images)?
Static assets are not dynamic files; I think you are confusing terms.
Either way, you should do the following:
Create a user account to run that application.
Put the application files under a directory that is controlled by that user and that user alone. Typically this is the /home/username directory, but you can make it /services/servicename. Place the virtual environment in a subdirectory of this directory, with a standard naming convention. For example, I use env.
Put your static assets, such as media files and CSS files, in a directory that is readable by your front-end server. Typically you would make this a www or public_html directory.
Make sure that the user account you create for this application has write access to this asset directory, so that you are able to update files. The proxy server should not have execute permissions on this directory; you can accomplish the access control by changing the group of the directory to that of the proxy server's user. Given this, I would put this directory under /home/username/ or /services/servicename.
Launch the application using a process manager, and make sure the process manager switches to the user created in step 1 when running your application code.
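Putting the steps above together, the layout can be scripted; a sketch in which the base path, user, and group names are all hypothetical and depend on your system:

import os
import shutil

# Per-service layout: env/ (virtualenv), app/ (code), public_html/ (assets).
base = "/services/myservice"
for sub in ("env", "app", "public_html"):
    os.makedirs(os.path.join(base, sub), exist_ok=True)

# Give the proxy's group read access to the static assets (run as root;
# the "myservice" user and "www-data" group must already exist).
shutil.chown(os.path.join(base, "public_html"), user="myservice", group="www-data")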
Finally, and I cannot stress this enough: DOCUMENT YOUR PROCESS and AUTOMATE IT.