How to run rabbitmqctl commands using python subprocess?

Running rabbitmqctl from a Python package using subprocess returns "command not found".
proc = subprocess.Popen(['/path/to/rabbitmqctl', 'arguments'], stdout=subprocess.PIPE)
output = proc.communicate()[0]
rt = proc.returncode
The above code is part of a Python project that is packaged as a wheel distribution. After installing the wheel through pip, the code returns exit code 127, which means "command not found".
I tried the full path to rabbitmqctl, used sudo with the command, and used preexec_fn in subprocess to set the uid to the rabbitmq user, but everything returns return code 127.
The command executes fine from the Python interpreter; the issue appears only when the code is installed as a package.
This code is part of a Flask app managed by Gunicorn. I've even tried starting Gunicorn with sudo, but got the same error.

The issue was due to the Python virtual environment.
I had installed the package that invokes the rabbitmqctl command into a Python virtual environment. So even though the module had root privileges, it could not find the rabbitmqctl command, because the directory containing that binary was not on the virtual environment's PATH. I fixed it by passing the env parameter to subprocess.
import os
import subprocess

rabbit_env = os.environ.copy()
rabbit_env['PATH'] = '/path/where/rabbitmqctl/is/located/:' + rabbit_env['PATH']
proc = subprocess.Popen(['/path/to/rabbitmqctl', 'arguments'], env=rabbit_env, stdout=subprocess.PIPE)
output = proc.communicate()[0]
rt = proc.returncode
The reason I got exit code 127 even when specifying the full path of rabbitmqctl is that rabbitmqctl is a script which runs other commands, and it could not find those dependent commands because their locations were not on the virtual environment's PATH either. So make sure you add the locations of all the commands rabbitmqctl depends on to rabbit_env['PATH'] above.
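For example, a sketch of prepending several directories at once (these directory names are illustrative placeholders; check where rabbitmqctl and its helpers actually live on your system):
import os

rabbit_env = os.environ.copy()
# Prepend each directory that rabbitmqctl or the commands it calls live in.
# These paths are placeholders, not the actual rabbitmq locations.
for directory in ('/usr/lib/rabbitmq/bin', '/usr/sbin', '/sbin'):
    rabbit_env['PATH'] = directory + ':' + rabbit_env['PATH']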

Related

mkdir command not found with Python 3 on Debian

I'm sure this is something simple, but I've tried several settings and I just can't seem to get this to work.
I have the following code:
import subprocess
p = subprocess.Popen('mkdir -p /backups/my_folder', stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
This is running in a Flask application behind Nginx, on Python 3.
When this executes I'm getting the following error:
/bin/sh: 1: mkdir: not found
I've tried shell=False, I've tried Popen(['mkdir', ...]), and I've tried subprocess.run, as suggested in a similar question/answer.
If I run with shell=False, I get the following error:
Error: [Errno 2] No such file or directory: 'mkdir -p /backups/my_folder': 'mkdir -p /backups/my_folder'
When I use the full path /bin/mkdir, it works. But there are other commands that call sub-commands, and those fail (tar calling gzip, for instance).
What am I missing to get this to work?
Running:
Debian 9.8, Nginx 1.14.0, Python 3.6.8
EDIT
I need this to work for other commands as well. I know I can use os.makedirs, but I have several different commands I will be executing (rsync, ssh, tar, and more)
For these simple commands, try to use Python itself instead of invoking the shell; it makes you more independent of the environment:
import os
os.makedirs('/backups/my_folder', exist_ok=True)
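The same idea covers some of the other commands from the edit above: the tar-calling-gzip case, for instance, has a pure-Python equivalent in the standard library (the paths here are illustrative):
import shutil

# Creates /backups/my_folder.tar.gz without spawning tar or gzip.
shutil.make_archive('/backups/my_folder', 'gztar', '/backups/my_folder')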
I found the problem.
I realized that my /etc/systemd/system/site.service uWSGI settings had a hard coded path:
Environment = /usr/local/bin
Once I changed this to include /bin, all my subprocess commands executed just fine.
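For reference, a corrected unit-file line would look something like this (a sketch; the exact directory list depends on your system, not the poster's actual file):
Environment="PATH=/usr/local/bin:/usr/bin:/bin"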
import subprocess
p = subprocess.Popen('mkdir -p my_folder', stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
(result, error) = p.communicate()
print(result)
This is for Windows 10 only.

python: subprocess.Popen, openvpn command not found

OS X 10.13.6 Python 3.6
I am trying to run the following command from a jupyter notebook:
from subprocess import Popen, PIPE, STDOUT

vpn_cmd = '''
sudo openvpn
--config ~/Downloads/configs/ipvanish-US-Chicago-chi-a49.ovpn
--ca ~/Downloads/configs/ca.ipvanish.com.crt'''
proc = Popen(vpn_cmd.split(), stdout=PIPE, stderr=STDOUT)
stdout, stderr = proc.communicate()
print(stdout.decode())
But get the error:
sudo: openvpn: command not found
What I've tried:
added export PATH="/usr/local/sbin:$PATH" to my ~/.bash_profile and can run the sudo openvpn command from my terminal
edited my sudoers file so sudo no longer prompts for a password
called sudo which openvpn and tried adding /usr/local/sbin/openvpn to my sys.path within python
not splitting vpn_cmd and setting shell=True
tried packaging it in a test.py script and executing from the terminal, but it just hangs at the proc.communicate() line
specified the full path for the --config and --ca flags
So far, nothing has fixed this. I can run openvpn from my terminal just fine. It seems like a simple path issue, but I can't figure out what I need to add to my Python path. Is there something particular about the Jupyter notebook kernel?
Jupyter probably isn't picking up your personal .bashrc settings, depending also on how you are running it. Just hardcode the path or augment the PATH in your Python script instead.
With shell=False the tildes are not expanded, so you should change those to os.environ["HOME"], or make sure you know which directory you run this from and use relative paths.
You should not be using Popen() if run can do what you require.
import os
import subprocess
from subprocess import PIPE

home = os.environ["HOME"]
r = subprocess.run(
    ['sudo', '/usr/local/sbin/openvpn',
     '--config', home + '/Downloads/configs/ipvanish-US-Chicago-chi-a49.ovpn',
     '--ca', home + '/Downloads/configs/ca.ipvanish.com.crt'],
    stdout=PIPE, stderr=PIPE, universal_newlines=True)
print(r.stdout)

"sh: sysctl Command not Found " for Mac OS X running a cron job

I have a Python script, script.py, and am using cron to run it periodically. The script runs as expected, but once the cron job finishes, I get the following error in /var/mail/[myusername]:
sh: sysctl Command Not Found
The following is the cron job:
0 14 * * * PATH=$PATH:/usr/sbin PYTHONPATH=/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ /usr/bin/python2.7 ~/.../script.py
I was told to include both PATH and PYTHONPATH in the task (before that, Python wouldn't recognize several modules I had imported and installed), so at this point I'm not sure what the problem could be.
On some Macs, sysctl is located in /sbin/ instead of /usr/sbin/. You should add /sbin to your PATH variable, as shown below.
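For example, the cron entry from the question would then become (only the PATH assignment changes):
0 14 * * * PATH=$PATH:/usr/sbin:/sbin PYTHONPATH=/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ /usr/bin/python2.7 ~/.../script.py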

how do you install requirements to arbitrary virtualenv in python scripts?

I am trying to install requirements for each project in a list automatically into its own virtualenv. I have gotten to the point of making the virtualenv correctly, but I cannot get it to activate and install requirements into only that virtualenv:
#!/usr/bin/env python
import subprocess, sys, time, os

HOMEPATH = os.path.expanduser('~')
CWD = os.getcwd()
d = {'cwd': ''}

if len(sys.argv) == 2:
    projects = sys.argv[1:]

def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()

def my_makedirs(path):
    if not path.startswith('/home/cchilders'):
        path = os.path.join(HOMEPATH, path)
    try: os.makedirs(path)
    except: pass

for project in projects:
    path = os.path.join(CWD, project)
    my_makedirs(path)
    git_string = 'git clone git@bitbucket.org:codyc54321/{}.git {}'.format(project, d['cwd'])
    call_sp(git_string)

    d = {'executable': 'bash'}
    call_sp("""source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv --no-site-packages {}""".format(project), **d)
    # call_sp("""source /usr/local/bin/virtualenvwrapper.sh && workon {}""".format(project), **d)
    # below, the dot (.) means the same as 'source'. the dot doesn't error, calling source does
    call_sp('. /home/cchilders/.virtualenvs/{}/bin/activate'.format(project))

    d = {'cwd': path}
    call_sp("pip install -r requirements.txt", **d)
It works up to
call_sp("""source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv --no-site-packages {}""".format(project), **d)
but when the script ends, I am not active in the venv, and the venv does not have any packages from requirements. Both attempts to source the venv (the commented-out one and the live one) fail.
The answer that helped me get the mkvirtualenv to work is subprocess.Popen: mkvirtualenv not found.
I also need to do more than just pip install; in one case I need to run 'python setup.py mycommand', which automates setup for each project. How can I run commands as if a virtualenv were activated, and install dependencies into arbitrary venvs, from a Python script?
The only workaround I've found is activating the virtualenv by hand and then calling my Python script by hand. I was surprised that activating it from bash worked while the Python script's attempt bombed (maybe because the script runs in a different process than the bash one).
Thank you
This is because each call_sp call creates a new shell, so when a call_sp call ends, all the settings created by sourcing virtualenvwrapper are gone. You have to combine all your commands into a single call_sp chain, as sketched below. Alternatively, you can start a shell using Popen and feed commands to it through communicate.
If you go with the latter, you need to be careful about synchronization and about detecting when the installation of requirements ends; pip can take a long time downloading and installing packages with complex dependencies.
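A minimal sketch of the single-chain variant, reusing the virtualenvwrapper path from the question (setup_project is an illustrative helper, not part of the original script):
import subprocess

def setup_project(project, path):
    # Everything runs in one bash process, so the shell functions defined by
    # sourcing virtualenvwrapper.sh and the venv activated by mkvirtualenv
    # are still in effect when pip runs.
    cmd = ('source /usr/local/bin/virtualenvwrapper.sh && '
           'mkvirtualenv --no-site-packages {0} && '
           'pip install -r requirements.txt').format(project)
    subprocess.Popen(cmd, shell=True, executable='bash', cwd=path).communicate()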
This is the way I have done this kind of bootstrapping for virtual environments. Let the script take care of its own env and just run the script. Running this app.py will set up its VE and modules if they are missing.
./requirements.txt file
flask
./app.py script
#!/bin/bash
""":"
VENV=$(realpath -s $(dirname $0)/ve)
PYTHON=$VENV/bin/python
if [ ! -f "$PYTHON" ]; then
    echo "installing env app"
    python3 -m venv $VENV
    ${VENV}/bin/pip install -r $(dirname $0)/requirements.txt
fi
exec $PYTHON $0 "$@"
"""
import flask
print("I am Python with flask", flask)
No matter what directory we are in, app.py bootstraps through the bash script header, installing a ve if the Python interpreter does not exist there yet, running pip, and whatever else you need. Then exec $PYTHON $0 "$@" is a slick way to swap out the bash process for the Python process while keeping the same pid.
When Python takes over, it skips the bash part because that script sits inside a triple-quoted string. So the first line Python executes is import flask (well, it discards the bash-script string first). Another nice property is that the pid of the bash process is the same as the pid of the Python process, so any daemon utility that babysits this will still see the pid it started.
The last trick is that bash needs one extra quote to balance its string """:" at the top; Python does not care about that extra quote.
I hope you see the pattern. To upgrade modules in requirements.txt, just rm the ve and run the app again. Simple.
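A quick usage sketch (assuming app.py and requirements.txt sit in the same directory):
chmod +x app.py
./app.py    # first run creates ./ve and installs flask; later runs go straight to Python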

Opening a terminal application from python and running custom scripts inside it

I'm working with software called dc_shell that provides a terminal command (also called dc_shell) on a CentOS Linux server. When I run the dc_shell command, I'm connected to its own shell and can run scripts/commands inside it. (This is all done manually.)
The real problem is that I want to do this entirely from a Python program: I have Python code that does some task, and afterwards it has to open dc_shell and run some commands inside it.
I have used subprocess.Popen before, and it has no problem running commands like ls or other general terminal commands. But when I run the dc_shell command it seems to crash and nothing happens, and when I terminate the session I get the following errors in my terminal.
Here's my code:
def run_scripts():
    commandtext = 'cd ..; dc_shell-xg-t; set_app_var link_library "slow.db"; set_app_var target_library "slow.db"; set_app_var symbol_library "tsmc18.sdb";'
    print(commandtext)
    process = subprocess.Popen(commandtext, stdout=subprocess.PIPE, shell=True)
    proc_stdout = process.communicate()[0].strip()
    print(proc_stdout)
and the output is:
cd ..; dc_shell-xg-t; set_app_var link_library "slow.db"; set_app_var target_library "slow.db"; set_app_var symbol_library "tsmc18.sdb";
and nothing happens... and after terminating I get:
[User@server python]$ /bin/sh: set_app_var: command not found
/bin/sh: set_app_var: command not found
/bin/sh: set_app_var: command not found
Do you need to use dc_shell to run your commands?
If so, dc_shell should be your executable, and the rest of the commands should be its arguments or input, as sketched below.
You should never use shell=True due to security considerations (the warning in the 2.x docs for subprocess seems much clearer to me).
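A minimal sketch of that idea, assuming dc_shell-xg-t will read commands from standard input (the tool name and the set_app_var lines come from the question; how your dc_shell version prefers to receive scripts may differ):
import subprocess

# Run dc_shell itself as the executable and feed the commands to it,
# instead of asking /bin/sh to interpret them.
commands = '\n'.join([
    'set_app_var link_library "slow.db"',
    'set_app_var target_library "slow.db"',
    'set_app_var symbol_library "tsmc18.sdb"',
])
process = subprocess.Popen(
    ['dc_shell-xg-t'],
    cwd='..',  # replaces the 'cd ..' from the question
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True)
proc_stdout, _ = process.communicate(input=commands)
print(proc_stdout.strip())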
