Permission denied for virtualenv python when using systemd - python

I am trying to run my Python script as a system service. For this I have installed all the necessary libraries in a virtualenv and created the following service unit in /usr/lib/systemd/system:
[Unit]
Description=Desc
After=network.target
[Service]
Type=simple
User=<my-user>
Restart=on-abort
WorkingDirectory=/home/<my-user>
ExecStart=/home/<my-user>/.virtualenvs/myvenv/bin/python /home/<my-user>/workspacePython/test.py
Environment="PATH=/home/<my-user>/.virtualenvs/myvenv/bin"
[Install]
WantedBy=multi-user.target
But when I try to run it, it fails with:
Okt 07 20:45:49 fedora systemd[40808]: test.service: Failed at step EXEC spawning /home/<my-user>/.virtualenvs/myvenv/bin/python: Permission denied
The access rights on the python binary are rather permissive (777), so I am confused about how this can happen.
My question is: how do I start the script within the virtualenv using systemd?

I had a similar issue with a service I created this morning.
Systemd does not seem to like WorkingDirectory=/home/<my-user>, or absolute paths that include /home; in either case it fails with the obscure "Permission denied" error.
The fix is to either replace references to "/home/" with ~, or move things outside the home path, e.g. /opt/<app-name>.
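As an illustration of the second option, the unit from the question might be relocated like this (the /opt/myapp paths are hypothetical, and the virtualenv should be recreated there rather than copied, since virtualenvs hard-code their own paths):
[Service]
Type=simple
User=<my-user>
Restart=on-abort
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/venv/bin/python /opt/myapp/test.py
Environment="PATH=/opt/myapp/venv/bin"
After editing the unit file, run systemctl daemon-reload so systemd picks up the change before restarting the service.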

Related

Get vim on a remote server without system administration permissions

I have been given a profile (with /home directory) on a remote Linux server to work on projects that need powerful computing resources. I'd like to use Vim to edit code (mostly python) on the remote server as it can be run through a shell and doesn't require a slow GUI exchange. Currently, the Debian distribution on the remote server has a barebones vi installed and no Vim. Is there a way to install a Vim (perhaps in my home directory?) without superuser permissions?
You should be able to install vim locally, for example by downloading a prebuilt binary, or by compiling from source with
git clone https://github.com/vim/vim.git
cd vim/src
make
From there, you can simply add the directory containing the compiled binary to your PATH.
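If you want a tidier per-user install rather than running the binary out of the source tree, vim's build also supports an install prefix; a minimal sketch, assuming $HOME/.local as the prefix:
./configure --prefix=$HOME/.local
make && make install
export PATH="$HOME/.local/bin:$PATH"
Add the export line to your ~/.bashrc to make it permanent.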

How to improve Python script performance when launched from systemd?

I have a Raspberry Pi (running a Debian-based distro) which needs to keep a service based on a Python script running.
What I have done so far is create the .service file and add it to the /lib/systemd/system/ folder. It now runs automatically at system boot and is restarted if any crash occurs; furthermore, a little logging system based on syslog has been added.
The content of the .service file looks like this so far:
[Unit]
Description=My_Service
After=network.target network-online.target
After=local-fs.target
[Service]
Type=simple
Restart=always
ExecStartPre=/bin/mkdir -p /home/user/log
ExecStart=/usr/local/bin/python3 -u /home/user/my_service.py
SyslogIdentifier=My_Service
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
Now I've noticed that the script is slightly less performant than when it is run from a terminal.
Because it is the only script that the system should keep running, I tried to give it the highest priority, but I am not sure how to do that.
So far I've added the following lines to the [Service] section, but I'm not sure whether this is correct or best practice.
CPUSchedulingPolicy=rr
CPUSchedulingPriority=99
Nice=-20
The question is: how can I give such a service maximum priority and maximum use of system resources in order to maximise its performance?
I'm also trying to disable other system services which are not useful for my embedded system, such as bluetooth.service. Could this kind of work be good practice?
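For reference, disabling and stopping a unit in one step looks like this (using bluetooth.service from the question as the example):
sudo systemctl disable --now bluetooth.service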
-- Edit --
No solutions found yet.
To run a Python script as a service, I recommend using Supervisor.
https://rcwd.dev/long-lived-python-scripts-with-supervisor.html
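For example, a minimal supervisord program entry for the script above might look like this (the program name and log path are illustrative, not taken from the question):
[program:my_service]
command=/usr/local/bin/python3 -u /home/user/my_service.py
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/user/log/my_service.log
Supervisor then provides the same restart-on-crash behaviour as the unit's Restart=always.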

Django manage.py command having SyntaxError on ElasticBeanstalk

I'm using AWS for the first time. After uploading my Django project, I wanted to know how to back up the DB contents to a file, so that if I have to modify the models of my project (still in development) I can edit the data and keep some population data.
I thought of the django dumpdata command, so to execute it on EB through the CLI I did the following (and here is where maybe I'm doing something wrong):
- eb ssh
- sudo -s
- cd /opt/python/current/app/
- python manage.py dumpdata --natural-foreign --natural-primary -e contenttypes -e auth.Permission --indent 4 > project_dump.json
From what I understand, the first command just opens an SSH session on the Elastic Beanstalk instance.
The second one gives root permissions inside the Linux server, to avoid problems creating and opening files, etc.
The third one changes to the directory where the currently deployed application lives.
And the last is the command I use to dump all the data in a "human friendly" format, without restrictions, so that it can be loaded into any other new database.
I have to say that I tried this last command on my local machine and it worked as expected, without any error or warning.
So, the issue I'm facing here is that when I execute this last command, I get the following error:
File "manage.py", line 14
) from exc
^
SyntaxError: invalid syntax
I also tried skipping the sudo -s and just using the permissions of the user I log in to SSH with, but got: -bash: project_dump.json: Permission denied. That is why I thought using the sudo command would help here.
In addition, I followed this well-known tutorial to deploy Django+PostgreSQL on EB, so the user I'm using to access SSH is in a group with AdministratorAccess permissions.
Before trying all of this, I also looked for a way to get this information directly from AWS RDS, but I only found a way to restore a backup, without being able to modify the content manually, so that is not what I really need.
As in your local environment, you need to run your manage.py commands inside the correct Python virtualenv and make sure that environment variables like RDS_USERNAME and RDS_PASSWORD are set. (The SyntaxError is the giveaway here: "raise ... from exc" is Python 3 syntax, so the script is being run by the system Python 2 interpreter instead of the virtualenv's one.) To do that, you need to:
Activate your virtualenv
Source your environment variables
As described at the end of the tutorial you mentioned, this is how to do it:
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
python manage.py <your_command>
And you have to do that every time you ssh into the machine.
Note: the reason you're getting the permission denied error is that when you redirect the output of dumpdata to project_dump.json, you're trying to write into the app directory itself. Not a good idea. Try redirecting to > ~/project_dump.json instead (your home directory); then sudo won't be needed.
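Putting it all together, the whole session would look something like this (paths as in the standard EB layout the tutorial uses):
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
cd /opt/python/current/app
python manage.py dumpdata --natural-foreign --natural-primary -e contenttypes -e auth.Permission --indent 4 > ~/project_dump.json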

Starting TRAC server with multiple independent projects

I'm running a TRAC server (tracd service) with 3 independent projects configured. Each project has its own password file in order to keep user management independent. TRAC is started as a Windows service as described on https://trac.edgewall.org/wiki/0.11/TracStandalone
It seems that starting the TRAC server does not work if the string stored in the 'AppParameters' value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\tracd\Parameters is too long. The maximum length seems to be around 260 characters.
The TRAC server can be started successfully using following 'AppParameters' key:
C:\Python27\Scripts\tracd-script.py -p 80 --auth=',C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth=',C:\Trac\Balances\conf\.htpasswd,mt.com' --auth=',C:\Trac\Weights\conf\.htpasswd,mt.com' C:\Trac\Moisture C:\Trac\Balances C:\Trac\Weights
The TRAC server does not start with following 'AppParameters' key:
C:\Python27\Scripts\tracd-script.py -p 80 --auth='Moisture,C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth='Balances,C:\Trac\Balances\conf\.htpasswd,mt.com' --auth='Weights,C:\Trac\Weights\conf\.htpasswd,mt.com' C:\Trac\Moisture C:\Trac\Balances C:\Trac\Weights
If I add a fourth project it is not possible to start the TRAC server anymore because the string is too long. Is this problem known? Is there a workaround?
You can shorten your command by using the -e option to specify the Trac environment parent directory, rather than explicitly listing each environment path.
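With the parent-directory option, the invocation from the question keeps its per-project password files but drops the trailing list of environment paths, saving a few dozen characters; a sketch, assuming all three projects live directly under C:\Trac:
C:\Python27\Scripts\tracd-script.py -p 80 --auth='Moisture,C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth='Balances,C:\Trac\Balances\conf\.htpasswd,mt.com' --auth='Weights,C:\Trac\Weights\conf\.htpasswd,mt.com' -e C:\Trac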
A more extensive solution:
You could run the service with nssm.
Install nssm and put it on your path. I installed using chocolatey package manager: choco install -y nssm.
Create a batch file, run_tracd.bat:
C:\Python27-x86\Scripts\tracd.exe -p 8080 env1
Run nssm install tracd and, in the GUI that opens, set the Path to the run_tracd.bat file.
Run nssm start tracd
You don't have to do it exactly like this. You could avoid the bat file and enter the parameters in the nssm GUI instead. I'm no Windows expert, but I like having the bat file because it's easier to edit. However, there may be security concerns that I'm unaware of, and it may be more robust to put the parameters in the nssm GUI (no need to worry about accidental deletion of the bat file).
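If you skip the bat file, nssm can also take the program and its arguments directly on the command line; a sketch based on nssm's documented install syntax, reusing the paths from above:
nssm install tracd C:\Python27-x86\Scripts\tracd.exe -p 8080 env1
nssm start tracd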

Python script gives error when running as an Ubuntu service

I have set up a service, and when I run it, I get the following error:
ImportError: No module named httplib2
I have httplib2 installed with pip and
my systemd ExecStart command is like this:
ExecStart=/usr/bin/python /home/orionas/Desktop/quickstart.py
The same script runs perfectly from the command line.
Hmm, I think you have probably installed httplib2 under your user, but systemd uses another user to run the quickstart script.
Under [Service], include a line User=<username>. The Python script will then inherit the permissions and paths of that user, AFAIK.
Note: it is probably not recommended to run a systemd service with the same user ID as your own; it is a potential security risk. Another possible solution would be to run the Python script within a virtualenv (http://docs.python-guide.org/en/latest/dev/virtualenvs/). Many people do this and, as far as I know, it's the recommended practice.
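Combining the two suggestions, the relevant [Service] lines might look like this (the user name comes from the script path in the question; the virtualenv path is hypothetical):
[Service]
User=orionas
ExecStart=/home/orionas/.virtualenvs/quickstart/bin/python /home/orionas/Desktop/quickstart.py
Pointing ExecStart at the virtualenv's python binary is equivalent to activating the environment first, since that interpreter resolves imports from the virtualenv's site-packages.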
