mkdir command not found with Python 3 on Debian - python

I'm sure this is something simple, but I'm trying several settings and I just can't seem to get this to work.
I have the following code:
import subprocess
p = subprocess.Popen('mkdir -p /backups/my_folder', stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
This is running in a Flask application on nginx with Python 3.
When this executes I'm getting the following error:
/bin/sh: 1: mkdir: not found
I've tried with shell=False, I've tried with Popen(['mkdir', ...]), and I've tried subprocess.run as in this question/answer.
If I run with shell=False, I get the following error:
Error: [Errno 2] No such file or directory: 'mkdir -p /backups/my_folder': 'mkdir -p /backups/my_folder'
When I use /bin/mkdir, it works. But other commands that call sub-commands fail the same way (tar calling gzip, for instance).
What am I missing to get this to work?
Running:
Debian 9.8, Nginx 1.14.0, Python 3.6.8
EDIT
I need this to work for other commands as well. I know I can use os.makedirs, but I have several different commands I will be executing (rsync, ssh, tar, and more)

For these simple commands, try to use Python itself instead of invoking the shell - it makes you more independent of the environment:
os.makedirs('/backups/my_folder', exist_ok=True)
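If you do stick with subprocess for the other tools (rsync, tar, ssh), one option is to pass an explicit PATH so the child process can find its binaries regardless of how the service was started. A minimal sketch (the PATH entries and the tar invocation are illustrative, not from the question):

import os
import subprocess

# Make sure the child sees the standard binary directories, even if the
# service environment stripped them from PATH.
env = {**os.environ, "PATH": "/usr/local/bin:/usr/bin:/bin"}
subprocess.run(["tar", "-czf", "/backups/my_folder.tar.gz", "/backups/my_folder"],
               env=env, check=True)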

I found the problem.
I realized that my /etc/systemd/system/site.service uWSGI settings had a hard-coded PATH:
Environment = /usr/local/bin
Once I changed this to include /bin, all my subprocess commands executed just fine.
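For reference, a systemd Environment line that includes the standard binary directories might look like this (a sketch; keep whatever venv or local paths your service actually needs):

[Service]
Environment="PATH=/usr/local/bin:/usr/bin:/bin"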

import subprocess
p = subprocess.Popen('mkdir -p my_folder', stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
(result, error) = p.communicate()
print(result)
This is for Windows 10 only.

Related

How to override ansible.cfg variable from a subprocess.call from a python script?

When I run the following command from the command line
ANSIBLE_DISPLAY_OK_HOSTS=true ansible-playbook -i my_inventory.yaml myplaybook.yaml --tag my_tag
then everything works fine. However, if I try to do so from a Python script using subprocess.call, it fails with "No such file or directory: 'ANSIBLE_DISPLAY_OK_HOSTS=true'"
What is the difference, and how can I fix it?
From within the Python script I tried calling it in the following ways:
1)
command = f"ANSIBLE_DISPLAY_OK_HOSTS=true ansible-playbook -i {inventory_path} {absolute_playbook_path} --tag {ansible_tag}" subprocess.run(command)
2)
command = ["ANSIBLE_DISPLAY_OK_HOSTS=true ansible-playbook", "-i", inventory_path, absolute_playbook_path, "--tag", ansible_tag] subprocess.run(command)
with no success.
You are trying to use shell syntax, but you're not executing your command with a shell. Use the env keyword of subprocess.run to provide environment variables to your command:
env = {"ANSIBLE_DISPLAY_OK_HOSTS": "true"}
command = [
"ansible-playbook",
"-i", inventory_path,
absolute_playbook_path,
"--tag", ansible_tag
]
subprocess.run(command, env=env)
You could make version 1 of your command work by specifying shell=True, like this:
command = f"ANSIBLE_DISPLAY_OK_HOSTS=true ansible-playbook -i {inventory_path} {absolute_playbook_path} --tag {ansible_tag}"
subprocess.run(command, shell=True)
But there's really no reason to involve a shell in this invocation.
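If you do go the shell=True route anyway, it would be prudent to quote the interpolated values with shlex.quote so paths containing spaces or shell metacharacters don't break the command. A sketch:

import shlex
import subprocess

command = (
    "ANSIBLE_DISPLAY_OK_HOSTS=true ansible-playbook"
    f" -i {shlex.quote(inventory_path)}"
    f" {shlex.quote(absolute_playbook_path)}"
    f" --tag {shlex.quote(ansible_tag)}"
)
subprocess.run(command, shell=True)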

FileNotFoundError: [Errno 2] No such file or directory: 'bash' when running gunicorn server from .service file

Getting a FileNotFoundError: [Errno 2] No such file or directory: 'bash' error while running my gunicorn Python app from a .service file.
However, running the gunicorn command by itself (not from the .service file) works fine.
gunicorn command to run the app
gunicorn -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 --bind <server_ip>:8080 wsgi
app.service file
[Service]
User=user
WorkingDirectory=/home/user/app
Environment="PATH=/home/user/app/app_venv/bin"
ExecStart=/home/user/app/app_venv/bin/gunicorn -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker --workers 1 --bind <server_ip>:8080 wsgi
Python code that is generating the error
import subprocess

cmd = ['bash', 'script.sh', args.get('arg')]
try:
    process = subprocess.Popen(cmd,
                               cwd='/path/to/bash_script',
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT,
                               universal_newlines=True)
    while process.poll() is None:
        output = process.stdout.readline()
        if output == '':
            break
        emit('tg_output', output)
except subprocess.CalledProcessError as error:
    pass
You are explicitly setting
Environment="PATH=/home/user/app/app_venv/bin"
PATH needs to contain all the directories of any external binaries you want to use. (In fact, there is no real need for it to contain the directory of your own binary if you are running it by full path anyway, so the best solution is probably to remove this PATH assignment from the file altogether.)
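If you do keep the assignment, extend it rather than replace the whole search path; a sketch based on the paths in the question:

[Service]
Environment="PATH=/home/user/app/app_venv/bin:/usr/local/bin:/usr/bin:/bin"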
Your Bash script doesn't seem to need Python to run it, and the Python wrapper you created to run it seems to have bugs (in particular, the blanket except looks unnerving); perhaps a better solution would be to run a separate Bash process altogether.
IMO the bash command is simply not in the PATH the service uses. It's better to always use the full path of the bash command.
cmd = ['/bin/bash', 'script.sh' , args.get('arg')]
Use which bash to get the full path.
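From Python you can do the same lookup with shutil.which (a sketch that falls back to /bin/bash if the lookup fails):

import shutil

bash = shutil.which("bash") or "/bin/bash"
cmd = [bash, 'script.sh', args.get('arg')]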

python: subprocess.Popen, openvpn command not found

OS X 10.13.6 Python 3.6
I am trying to run the following command from a jupyter notebook:
from subprocess import Popen, PIPE, STDOUT

vpn_cmd = '''
sudo openvpn
--config ~/Downloads/configs/ipvanish-US-Chicago-chi-a49.ovpn
--ca ~/Downloads/configs/ca.ipvanish.com.crt'''
proc = Popen(vpn_cmd.split(), stdout=PIPE, stderr=STDOUT)
stdout, stderr = proc.communicate()
print(stdout.decode())
But get the error:
sudo: openvpn: command not found
What I've tried:
added export PATH="/usr/local/sbin:$PATH" to my ~/.bash_profile and can run the sudo openvpn command from my terminal
edited my sudoers file so sudo no longer prompts for a password
called sudo which openvpn and tried adding /usr/local/sbin/openvpn to my sys.path within python
not splitting vpn_cmd and setting shell=True
tried packaging it in a test.py script and executing from the terminal, but it just hangs at the proc.communicate() line
specified the full path for the --config and --ca flags
So far, nothing has fixed this. I can run openvpn from my terminal just fine. It seems like a simple path issue but I can't figure out what I need to add to my python path. Is there something particular about the jupyter notebook kernel?
Jupyter probably isn't picking up your personal .bashrc settings, depending also on how you are running it. Just hardcode the path or augment the PATH in your Python script instead.
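Augmenting the PATH from Python could look like this (a sketch; /usr/local/sbin is where the question's openvpn lives):

import os

# Prepend the directory that contains openvpn, so both this process
# and any children it spawns can find it.
os.environ["PATH"] = "/usr/local/sbin:" + os.environ["PATH"]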
With shell=False you don't get the tildes expanded; so you should change those to os.environ["HOME"], or make sure you know in which directory you run this, and use relative paths.
You should not be using Popen() if run can do what you require.
import os
import subprocess

home = os.environ["HOME"]
r = subprocess.run(
    ['sudo', '/usr/local/sbin/openvpn',
     '--config', home + '/Downloads/configs/ipvanish-US-Chicago-chi-a49.ovpn',
     '--ca', home + '/Downloads/configs/ca.ipvanish.com.crt'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
print(r.stdout)

Run sequential commands in Python with subprocess

Hope you can help. In my Python script I need to run the software container Docker with a specific image (Fenics in my case) and then pass it a command to execute a script.
I've tried with subprocess:
import shlex
import subprocess

cmd1 = 'docker exec -ti -u fenics name_of_my_container /bin/bash -l'
cmd2 = 'python2 shared/script_to_be_executed.py'
process = subprocess.Popen(shlex.split(cmd1),
                           stdout=subprocess.PIPE,
                           stdin=subprocess.PIPE,
                           stderr=subprocess.PIPE)
process.stdin.write(cmd2)
print(process.stdout.read())
But it doesn't do anything. Suggestions?
Drop the -it flags in your call to docker, you don't want them. Also, don't try to send the command to execute into the container via stdin; just pass the command to run in your call to docker exec.
I don't have a container running, so I'll use docker run instead, but the code below should give you a clue:
import subprocess
cmd = 'docker run python:3.6.4-jessie python -c print("hello")'.split()
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
out, err = p.communicate()
print(out)
This will run python -c print("hello") in the container and capture the output, so the Python (3.6) script will itself print
b'hello\n'
It will also work in Python 2.7; I don't know which version you're using on the host machine :)
Regarding communicating with a subprocess, see the official docs subprocess.Popen.communicate. Since Python 3.5 there's also subprocess.run, which makes your life even easier.
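With run, the same example might look like this (same illustrative image and command as above):

import subprocess

cmd = 'docker run python:3.6.4-jessie python -c print("hello")'.split()
result = subprocess.run(cmd, stdout=subprocess.PIPE)
print(result.stdout)  # b'hello\n'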
HTH!
You can use subprocess to call Fenics as an application, as described in section 4.4 here.
docker run --rm -v $(pwd):/home/fenics/shared -w /home/fenics/shared quay.io/fenicsproject/stable "python3 my-code.py"
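Translated into subprocess, that command might look like the sketch below ($(pwd) becomes os.getcwd(); the quoted "python3 my-code.py" stays a single argument, as in the shell version):

import os
import subprocess

subprocess.run([
    "docker", "run", "--rm",
    "-v", os.getcwd() + ":/home/fenics/shared",
    "-w", "/home/fenics/shared",
    "quay.io/fenicsproject/stable",
    "python3 my-code.py",
], check=True)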

Unexpected behavior from Popen once web app is deployed with apache

I have some code that uses subprocess to look at the logs from a git directory. My code seems to work fine when executed in a local Django dev environment. Once deployed however (with Apache / mod_wsgi) the output from stdout read() comes back empty. My development and production machines are the same right now, and I also tried making sure every file is readable.
Does anybody have an idea why Popen is not returning any output once deployed here? Thanks.
def getGitLogs(projectName, since, searchTerm=None):
    os.chdir(os.path.join(settings.SCM_GIT, projectName))
    cmd = "git log --since {0} -p".format(since)
    p = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, close_fds=True)
    output = p.stdout.read()
    ### here output comes back as expected in dev environment, but empty once deployed
    return filterCommits(parseCommits(output), searchTerm)
Chain your chdir as part of your command (i.e., cd /foo/bar/zoo)
Pass the full path to git
So your command would end up as: cd /foo/bar/zoo && /usr/bin/git log --since
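Applied to the function above, that might look like this (a sketch; verify the git path on your system with which git):

def getGitLogs(projectName, since, searchTerm=None):
    repo = os.path.join(settings.SCM_GIT, projectName)
    # cd inside the same shell invocation instead of os.chdir, and call
    # git by full path so the deployed environment cannot miss it.
    cmd = "cd {0} && /usr/bin/git log --since {1} -p".format(repo, since)
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT, close_fds=True)
    output = p.stdout.read()
    return filterCommits(parseCommits(output), searchTerm)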
