Unexpected behavior from Popen once web app is deployed with Apache - python

I have some code that uses subprocess to look at the logs from a git directory. It works fine when executed in a local Django dev environment, but once deployed (with Apache / mod_wsgi) the output from stdout.read() comes back empty. My development and production machines are the same right now, and I have also made sure every file is readable.
Does anybody have an idea why Popen is not returning any output once deployed here? Thanks.
import os
import subprocess

from django.conf import settings

def getGitLogs(projectName, since, searchTerm=None):
    os.chdir(os.path.join(settings.SCM_GIT, projectName))
    cmd = "git log --since {0} -p".format(since)
    p = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, close_fds=True)
    output = p.stdout.read()
    ### here output comes back as expected in dev environment, but empty once deployed
    return filterCommits(parseCommits(output), searchTerm)

Chain your chdir as part of your command (i.e., cd /foo/bar/zoo)
Pass the full path to git
So your command would end up as cd /foo/bar/zoo && /usr/bin/git log --since
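A minimal sketch of that suggestion, reusing the names from the question (the /usr/bin/git location is an assumption; check it with which git on your machine):
import os
import subprocess

repo = os.path.join(settings.SCM_GIT, projectName)
cmd = "cd {0} && /usr/bin/git log --since {1} -p".format(repo, since)
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, close_fds=True)
output = p.stdout.read()
This also avoids os.chdir(), which changes the working directory for the entire WSGI process rather than just this request.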

Related

Why doesn't my input from the subprocess module go through a netcat-spawned /bin/bash after I make it a bit stable with pty?

Consider this python script:
import subprocess
nc = subprocess.Popen(["/bin/bash"], stdin=subprocess.PIPE, text=True)
nc.stdin.write("nc localhost 2222\n")
nc.stdin.write("pwd\n")
When I listen with netcat as nc -lnvp 2222, I successfully connect, and the string pwd is sent; nothing more happens, of course.
Now, in a completely new session, I get a non-stable PHP reverse shell and connect through netcat successfully. I execute this script to upgrade the shell and print the current directory. (By the way, that listener is another Popen instance.)
import subprocess
nc = subprocess.Popen(["/bin/bash"], stdin=subprocess.PIPE, text=True)
nc.stdin.write("nc localhost 2222\n")
nc.stdin.write('python3 -c "import pty;pty.spawn(\'/bin/bash\')"\n')
nc.stdin.write('pwd\n')
Now when I execute that python script, I expect the input to go through netcat, get executed in the new bash tty, spawn a stable shell, and pass pwd to return the current directory. But the script only works up to spawning the stable shell; after that, stdin input doesn't go through nc, or something else happens that I'm not aware of.
What's happening here?
Edit: I need to be able to run multiple commands. Using subprocess.communicate(input=<command>) causes a deadlock and can't accept further stdin.
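One thing worth ruling out before anything pty-related: with text=True the child's stdin pipe is block-buffered, so writes may not reach bash until the pipe is closed. A sketch that flushes after each command and gives each stage time to start (same names as the question; this is an educated guess, not a confirmed fix):
import subprocess
import time

nc = subprocess.Popen(["/bin/bash"], stdin=subprocess.PIPE, text=True)
nc.stdin.write("nc localhost 2222\n")
nc.stdin.flush()
time.sleep(1)  # let the netcat connection establish
nc.stdin.write('python3 -c "import pty;pty.spawn(\'/bin/bash\')"\n')
nc.stdin.flush()
time.sleep(1)  # let the pty spawn before sending more input
nc.stdin.write("pwd\n")
nc.stdin.flush()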

mkdir command not found with Python 3 on Debian

I'm sure this is something simple, but I've tried several settings and I just can't seem to get this to work.
I have the following code:
import subprocess
p = subprocess.Popen('mkdir -p /backups/my_folder', stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
This is running in a Flask application on nginx with Python 3.
When this executes I'm getting the following error:
/bin/sh: 1: mkdir: not found
I've tried with shell=False, I've tried with Popen(['mkdir', ...]), and I've tried subprocess.run like this question/answer
If I run with shell=False, I get the following error:
Error: [Errno 2] No such file or directory: 'mkdir -p /backups/my_folder': 'mkdir -p /backups/my_folder'
When I run /bin/mkdir with the full path, it works. But there are other commands which call subcommands that fail (tar calling gzip, for instance).
What am I missing to get this to work?
Running:
Debian 9.8, Nginx 1.14.0, Python 3.6.8
EDIT
I need this to work for other commands as well. I know I can use os.makedirs, but I have several different commands I will be executing (rsync, ssh, tar, and more)
For these simple commands, try to use Python instead of invoking the shell - it makes you more independent of the environment:
import os

os.makedirs('/backups/my_folder', exist_ok=True)
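In the same spirit, some of the other commands mentioned have stdlib equivalents; for example, tar (a sketch, assuming a gzipped archive of the backup folder is the goal):
import shutil

# Produces /backups/my_folder.tar.gz without shelling out to tar/gzip
shutil.make_archive('/backups/my_folder', 'gztar', '/backups/my_folder')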
I found the problem.
I realized that my /etc/systemd/system/site.service uWSGI settings had a hard-coded path:
Environment = /usr/local/bin
Once I changed this to include /bin, all my subprocess commands executed just fine.
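The unit file itself isn't shown, but the fix presumably amounted to something like this in /etc/systemd/system/site.service (the exact directory list is an assumption; keep whatever else your PATH needs):
[Service]
Environment="PATH=/usr/local/bin:/usr/bin:/bin"
After editing the unit, run systemctl daemon-reload and restart the service for the change to take effect.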
import subprocess
p = subprocess.Popen('mkdir -p my_folder', stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
(result, error) = p.communicate()
print(result)
This is for Windows 10 only.

python: subprocess.Popen, openvpn command not found

OS X 10.13.6, Python 3.6
I am trying to run the following command from a Jupyter notebook:
from subprocess import Popen, PIPE, STDOUT

vpn_cmd = '''
sudo openvpn
--config ~/Downloads/configs/ipvanish-US-Chicago-chi-a49.ovpn
--ca ~/Downloads/configs/ca.ipvanish.com.crt'''
proc = Popen(vpn_cmd.split(), stdout=PIPE, stderr=STDOUT)
stdout, stderr = proc.communicate()
print(stdout.decode())
But get the error:
sudo: openvpn: command not found
What I've tried:
added export PATH="/usr/local/sbin:$PATH" to my ~/.bash_profile and can run the sudo openvpn command from my terminal
edited my sudoers file so sudo no longer prompts for a password
called sudo which openvpn and tried adding /usr/local/sbin/openvpn to my sys.path within python
not splitting vpn_cmd and setting shell=True
tried packaging it in a test.py script and executing from the terminal, but it just hangs at the proc.communicate() line
specified the full path for the --config and --ca flags
So far, nothing has fixed this. I can run openvpn from my terminal just fine. It seems like a simple path issue but I can't figure out what I need to add to my python path. Is there something particular with the jupyter notebook kernel?
Jupyter probably isn't picking up your personal .bashrc settings, depending also on how you are running it. Just hardcode the path or augment the PATH in your Python script instead.
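Augmenting the PATH from Python would look something like this, placed before the Popen call (a minimal sketch):
import os

os.environ["PATH"] = "/usr/local/sbin:" + os.environ["PATH"]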
With shell=False you don't get the tildes expanded, so you should change those to os.environ["HOME"], or make sure you know which directory you run this in and use relative paths.
You should not be using Popen() if run can do what you require.
import os
import subprocess
from subprocess import PIPE

home = os.environ["HOME"]
r = subprocess.run(
    ['sudo', '/usr/local/sbin/openvpn',
     '--config', home + '/Downloads/configs/ipvanish-US-Chicago-chi-a49.ovpn',
     '--ca', home + '/Downloads/configs/ca.ipvanish.com.crt'],
    stdout=PIPE, stderr=PIPE, universal_newlines=True)
print(r.stdout)

executing commands in containers from within a container using docker-compose up vs docker-compose run

I'll try to explain this as simply as possible.
I have a dockerised Python app. Within this app, at some point, I try to run a docker command in another (libreoffice) container, as such:
import subprocess
file_path = 'path_to_file'
args = ['docker', 'run', '-it', '-v', '/tmp:/tmp',
        'lcrea/libreoffice-headless', '--headless', '--convert-to', 'pdf', file_path,
        '--outdir', '/tmp']
process = subprocess.run(args,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         timeout=timeout)  # timeout is assumed to be defined elsewhere in the app
I end my python app's Dockerfile with a command which starts the server:
CMD python3 -m app.run_app
What is interesting is when I start the python app like this it works fine:
docker-compose run -p 9090:9090 backend /bin/bash
root@74430c3f1f0c:/src# python3 -m app.run_app
But when I start it just using docker-compose up, the libreoffice container is never called. I am sure of this because when I run docker ps -a, in the first case a libreoffice container has been created, while in the second there is none.
What is going on here?
I found the error. I was passing in the -it option, which was failing the process with "the input device is not a TTY". All I had to do was take it out...
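So the argument list from the question becomes (same names as above, minus -it):
args = ['docker', 'run', '-v', '/tmp:/tmp',
        'lcrea/libreoffice-headless', '--headless', '--convert-to', 'pdf', file_path,
        '--outdir', '/tmp']
Under docker-compose run ... /bin/bash you had a TTY attached, which is why -it happened to work there but not under docker-compose up.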

python subprocess output on nohup

Trying to monitor the available physical disk space of a remote machine using a Python script, which executes the df -h . command using subprocess.Popen.
import subprocess
import time
command = 'ssh remoteserver "df -h ."'
while True:
    proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, err = proc.communicate()
    print output
    print err
    time.sleep(60)
The script runs fine and prints the output to the terminal when run from the command line:
$> python2.7 script.py
Filesystem Size Used Avail Use% Mounted on
remoteserver:/home/user
555G 447G 109G 81% /home
The script does not produce any output and seems to block when it is started with the nohup command.
$> nohup python2.7 script.py &
I would like the script to fetch the disk space of the remote machine, as above, when started under nohup.
I'm not 100% sure of the underlying issue here, but when you invoke nohup in the shell, it disconnects some of the stdin/stdout from the terminal process, which I suspect is causing some of the interactions you're seeing.
Given that you're working against a remote machine, I'd actually recommend you look at using something like Fabric as a library to do what you're after. It's pretty straightforward, and does most of the handling of terminal sessions, as well as closing things down nicely for you when you're done.
something like:
from fabric import api
from fabric.api import env
import fabric
env.host_string = '%s@%s' % (username, remote_host)
env.disable_known_hosts = True
env.password = password
fabric.state.output['stdout'] = False
fabric.state.output['stderr'] = False
results = api.run('df -h')
You might try passing stdin=subprocess.PIPE to the Popen call, then calling proc.stdin.close() on the next line, before the communicate() call. Or you can try changing the command to 'ssh remoteserver "df -h ." </dev/null'. Others report using FNULL = open(os.devnull, 'r') and passing FNULL to the stdin= argument, but I'm not sure whether you need to call FNULL.close() afterwards or not.
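A sketch of the os.devnull variant applied to the loop body from the question:
import os
import subprocess

FNULL = open(os.devnull, 'r')
proc = subprocess.Popen(command, shell=True, stdin=FNULL,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, err = proc.communicate()
FNULL.close()  # closing after communicate() is fine; the child inherited its own copy of the fd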
SSH is most likely waiting for input for some reason when it is run from nohup. Perhaps it is unable to authenticate in the nohup environment and is asking for password input?
To make sure SSH is not waiting for input, try adding -o "BatchMode yes" to the ssh command and look for clues in the output/error from the subprocess communicate call.
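With the question's command string, that would be:
command = 'ssh -o "BatchMode yes" remoteserver "df -h ."'
With BatchMode enabled, ssh fails immediately instead of prompting for a password, so the failure should show up in err rather than blocking.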
