Python script in systemd: virtual environment or real environment

I have been trying to run a Python script on start-up (on a Pi). I initially did this via an .sh script triggered by cron.
After posting about a problem on the Raspberry Pi Stack Exchange (https://raspberrypi.stackexchange.com/questions/110868/parts-of-code-not-running-when-autostarting-script-in-crontab), the suggestion was to use systemd.
The person helping me there suggested not using a virtual environment when executing the Python script, and using the real environment instead (they note their limited familiarity with Python). But other resources strongly suggest using a virtual environment (e.g. https://docs.python.org/3/tutorial/venv.html).
In the hope of setting this up correctly, could anyone weigh in on the correct approach?

Use the virtual environment. I don't see any reason not to. At some point you might want to have multiple Python applications run at the same time on that system, and these applications might require different versions of the same dependency and then you would be back to square one, so... Use the virtual environment.
When configuring systemd, crontab, or whatever, make sure to use the python binary that is placed inside the virtual environment's bin directory, so that there is no need to activate the virtual environment:
/path/to/venv/bin/python -m my_executable_module
/path/to/venv/bin/python /path/to/my_script.py
/path/to/venv/bin/my_executable_script
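
For example, a minimal systemd service unit along these lines might look like the following (the description, paths, and User=pi are placeholders, not values from the question):

[Unit]
Description=My Python app
After=network.target

[Service]
# Point ExecStart at the venv's own interpreter; no activation step is needed.
ExecStart=/path/to/venv/bin/python /path/to/my_script.py
WorkingDirectory=/path/to
User=pi
Restart=on-failure

[Install]
WantedBy=multi-user.target

Saved as e.g. /etc/systemd/system/my_app.service, it can then be enabled with systemctl enable --now my_app.service.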

systemd is going to try to run your script on startup, so your virtual environment will not have been activated yet. You can (maybe) avoid that issue by telling systemd to use the python in the virtualenv's bin directory, with the appropriate environment variables. Or you can activate the virtualenv as a pre-run step in that script's systemd unit. Maybe.
But on balance I'd make it easy on systemd and your OS and ignore the virtualenv absolutists. Get the script to work on your dev machine using virtualenv all you want, but then prep systemd to use the global python, with suitable packages installed. You can always use virtualenvs on that Pi for scripts that don't have to work with systemd. systemd doesn't always have the clearest error messages.
(If you need to import custom modules, you could inject directories into sys.path in your script, as sketched below. This could even avoid installing packages into the global Python entirely.)
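A minimal sketch of that sys.path approach (the vendor directory and module name are made up for illustration):

import sys

# Make a locally vendored package directory importable without installing
# anything into the global Python. '/home/pi/myapp/vendor' is a placeholder.
sys.path.insert(0, '/home/pi/myapp/vendor')

import my_custom_module  # hypothetical module living in the vendored directory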
This answer is certainly opinion-based.

Related

Importing Python packages on an Ubuntu server

I have an Ubuntu server with restricted access, where I will be hosting my application.
I am trying to run Python scripts which were working with the default packages provided by the server. I want to work with numpy and other modules.
As I cannot install or download anything, I created a Python server on my local machine (Windows) using WSL to emulate the Linux file system, copied the Python environment files to the application directory, and deployed to the cloud.
The problem is that no matter what I try, I cannot import numpy (or any module which I copied). I moved all the site-packages to the location of my Python script (as the current script's path will be on the system path) and tried to import, but no luck.
Please help me crack this in any possible or impossible way.
I have been trying to achieve this for the past 6 days and cannot do it.
Please, I have to achieve this at any cost. I have attached my latest structure.
Thank you in advance.
My Folder structure screenshot:
EDIT:
Ok. Let me get this straight. I have a Linux server (Ubuntu 18.04) where I am hosting an application. From that application, I am calling Python scripts for some machine learning purposes. It is a restricted server and I cannot access it. The only way I found out the Linux distro version is through Java code, by calling some terminal commands using "ProcessBuilder". As the server is highly restricted, I cannot run any Linux commands like echo, set, export, sudo, wget/curl, etc. Since python3 is already provided by Linux (by default), I am using that python3 command to call my Python scripts (from Java code using "ProcessBuilder") and execute them.
If it is a normal script (using only Python standard libraries), it works fine. In one of the scripts I am using "numpy", so I want to import that module. I am doing the development in a Windows environment. So, to emulate the Linux file system for importing packages, I created a virtual environment in WSL with the same Ubuntu version, installed numpy, and then replaced all the symlinks inside those packages with the required files. Then I copied the entire environment, pasted it into my resources directory (which is in the Windows environment), and deployed. No luck.
So, I made a zip file of only the "site-packages" folder inside that environment, copied the zip file into my resources folder, and deployed. No luck. The error that I always see is "numpy.core._multiarray_umath". All the articles, and GitHub too, tell us to re-install the package. But I cannot install; I don't have any such access.
How can I import numpy without installation? If there is any workaround to achieve this, please explain and I will do it. Even if it is harder, more complex and time-consuming, I am okay with it. I want to achieve this.
Let me preface this with a warning: please check the AUP (acceptable use policy) of the server you are using, and/or contact the server administrator to make sure you are not violating any rules.
I can think of quite a few reasons why this won't work. If it doesn't, then there may still be workarounds, but they'll be technically complex.
So if I'm understanding you correctly:
You have very limited access to the server; basically only the ability to upload (apparently) and run Java code.
You've also been able to upload Python code and run it from your Java code via ProcessBuilder.
You do not have access to log in to a shell, execute arbitrary commands other than through ProcessBuilder, etc.
Of course, you do not have the ability to install site-packages into the system Python environment.
So ultimately, what you'll probably need to do is something like:
Create a Python3 virtual environment (which doesn't seem to be what you are actually doing) on WSL. By a "Python3 virtual environment", I mean venv, which allows you to create a user-level (not system-level) directory with your packages.
So something like (from inside your project directory):
python3 -m venv venv
source ./venv/bin/activate
Your path will be adjusted so that your python3 and pip3 commands will be found in the venv path. pip3 install numpy will install it into this virtual environment (not the global/system Python).
Upload that entire venv directory to the server. You seem to have some way of doing this already.
You're going to have to have some way of running the Bash shell through ProcessBuilder. Since you have the ability to run python3 through ProcessBuilder, I'm kind of assuming that you will be able to do this as well.
You'll need to (through ProcessBuilder) source <path_to_project>/venv/bin/activate on the server and, in the same Bash shell, run your code.
This will look something like:
bash -c "source ./venv/bin/activate; python3 main.py"

How to distribute a Python virtual environment?

I have created a Python virtual environment using virtualenv for Python 2.7.18 64 bit and another virtual environment using venv for Python 3.5.4 64 bit.
I was hoping to be able to commit these items into version control so that other users of the project could access them without having to set up a Python environment themselves. Another issue is that some of the workstations will not have internet access to easily create a virtual environment from scratch, so using a requirements.txt file is not a valid solution.
It seems like there are a fair number of issues preventing a virtual environment (whether created with virtualenv or venv) from simply being 'copied' and executed on another system.
Is what I am describing even possible? I have tried tinkering with the 'activate' scripts to remove some of the hard-coded paths, but that doesn't seem to do the trick.
Thanks
Have you considered using Docker? If you just distribute an image (or use docker-compose for multiple images), the users will not need to set up a virtual environment at all.
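A minimal Dockerfile sketch of that idea (requirements.txt and main.py are assumed names): build the image once on a machine that has internet access, then ship it to the offline workstations.

FROM python:3.5-slim
WORKDIR /app
# Dependencies are baked into the image at build time, so the target
# workstations never need internet access or a virtual environment.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]

To move the image without a registry:

docker save my_app | gzip > my_app.tar.gz
docker load < my_app.tar.gz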

Do I need to have Python installed on my machine to run a code in a virtual environment?

I want to create a Python virtual environment in a Windows network folder, and then activate and use this venv on another computer. My question is: do I need to have Python installed on all the machines on which I want to use this virtual environment?
If so, why do I need to install python everywhere if it's already inside this venv i will create, along with all the necessary packages?
Virtual environments are not designed to be portable. For instance, if you have entry points installed, moving them to another machine would break their shebang lines (see the sketch below). Even moving the environment around on your local machine is not guaranteed to work, because something else may be structured to be directory-specific, such as your username: when you create a Python venv, it records absolute paths to your Python installation, and your username may be part of those paths. So it is not a good idea. Instead, share the .py files themselves: upload them to the cloud, send them via Bluetooth to a phone, or copy them to a USB stick, and create the environment on the target machine.
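For instance, a console script that pip installs into a venv's bin directory starts with an absolute shebang pointing at that venv's interpreter, roughly like this (the path and package are illustrative):

#!/home/alice/project/venv/bin/python
# Generated entry-point script: the shebang above hard-codes the venv's
# location, so copying the venv to another machine or user breaks it.
import sys
from mypackage.cli import main  # hypothetical package entry point

if __name__ == '__main__':
    sys.exit(main())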

Python, virtualenv: telling IDE to run the debug session in virtualenv

My project's running fine in virtualenv. Unfortunately, I can't yet run it via my IDE (Eric) because of the import troubles. It stands to reason, as I never told the IDE anything about virtualenv.
I know the drill ($ source project/bin/activate etc.), but lack general understanding. What constitutes "running inside virtualenv"? What IDE options might be relevant?
I think the only required setting to run or debug code is the path to the Python interpreter.
Relevant IDE options could be SDK or Interpreter settings.
Note that you should run not the default python (e.g. /usr/bin/python) but the python binary in your virtual environment (e.g. /path/to/virtualenv/bin/python).
Also, there are some environment variables set by activate, but I think they aren't needed when you point to the virtualenv's python binary directly.
So, again, all activate does is set up environment variables: at a minimum, it modifies the system $PATH so that the python and pip commands point to the executables under the path/to/virtualenv/bin directory.
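A quick way to confirm which interpreter is actually running, e.g. from the IDE's run or debug console:

import sys

print(sys.executable)  # e.g. /path/to/virtualenv/bin/python when the venv is used
print(sys.prefix)      # points inside the virtualenv rather than the system install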
As far as I know, it is possible to run your script/project using your virtualenv simply by calling /path/to/your/venv/bin/python your_script.py. To install new packages in your venv, for example, you would run /path/to/your/venv/bin/pip install some_package.
I guess the main advantage of "running inside virtualenv" is not having to spell out the location of the Python packages/executable every time you want to run a script. But I lack general understanding too.
I usually install a virtualenv with the --no-site-packages option in order to have a "clean" installation of Python.
--- EDIT ---
The second answer of this discussion has a nice explanation.
Simply look at project/bin/activate; everything you need to set up the appropriate search paths is there.
Usually the most important one is PYTHONPATH, which should point to the site-packages directory.
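For example (the path and Python version are illustrative):

export PYTHONPATH=/path/to/project/venv/lib/python2.7/site-packages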

Running Python from a virtualenv with Apache/mod_wsgi, on Windows

I'm trying to set up WAMP server. I've got Apache working correctly, and I've installed mod_wsgi without a hitch.
Problem is, I'm using virtual environments (using virtualenv) for my projects. So obviously, mod_wsgi is having problems locating my installation of Django.
I'm trying to understand how I can get mod_wsgi to work well with the virtualenvs. The documentation seems to think this isn't possible:
Note that the WSGIPythonHome directive can only be used on UNIX systems and is not available on Windows systems. This is because on Windows systems the location of the Python DLL seems to be what dictates where Python will look for the Python library files. It is not known at this point how one could create a distinct baseline environment independent of the main Python installation on Windows.
From here: mod_wsgi + virtualenv docs.
Does anyone have some idea on how to make this work?
You can activate the environment programmatically from Python by adding this to your .wsgi file, before importing anything else.
From virtualenv's docs:
Sometimes you can't or don't want to use the Python interpreter created by the virtualenv. For instance, in a mod_python or mod_wsgi environment, there is only one interpreter. Luckily, it's easy. You must use the custom Python interpreter to install libraries. But to use libraries, you just have to be sure the path is correct. A script is available to correct the path. You can set up the environment like:
activate_this = '/path/to/env/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
This will change sys.path and even change sys.prefix, but also allow you to use an existing interpreter. Items in your environment will show up first on sys.path, before global items. However, this cannot undo the activation of other environments, or modules that have been imported. You shouldn't try to, for instance, activate an environment before a web request; you should activate one environment as early as possible, and not do it again in that process.
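Note that execfile only exists on Python 2. With an environment created by the virtualenv package (which still ships activate_this.py; the stdlib venv module does not), the Python 3 equivalent is roughly:

activate_this = '/path/to/env/bin/activate_this.py'
with open(activate_this) as f:
    exec(f.read(), {'__file__': activate_this})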
