Python3 Run Alias Bash Commands

I have the following code that works great to run the ls command. I have a bash alias that I use, alias ll='ls -alFGh'. Is it possible to get Python to run the bash command without Python loading my bash_alias file, parsing it, and then actually running the full command?
import subprocess

command = "ls"  # the shell command
# Launch the shell command:
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=None, shell=True)
output = process.communicate()
print(output[0])
Trying with command = "ll", the output I get is:
/bin/sh: ll: command not found
b''

You cannot. When you run a Python process it has no knowledge of a shell alias. There are only two simple ways of passing text from parent to child process (other than IPC): the command line and environment (i.e. exported) variables. Bash does not support exporting aliases.
From the bash man page: "For almost every purpose, aliases are superseded by shell functions."
Bash does support exporting functions, so I suggest you make your alias a simple function instead. That way it is exported from your shell, through Python, to the child shell. For example:
In the shell:
ll() { ls -l; }
export -f ll
In python:
import subprocess
command = "ll" # the shell command
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=None, shell=True)
output = process.communicate()
print(output[0].decode()) # Required if using Python 3
Since you are using the print() function, I have assumed you are using Python 3, in which case you need the .decode(), since a bytes object is returned.
With a bit of hackery it is possible to create and export shell functions from python as well.
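For example, here is a minimal sketch of that hackery. It relies on the way post-Shellshock bash imports exported functions from environment variables named BASH_FUNC_<name>%% (an assumption: the exact encoding varies between bash versions; this form matches bash 4.4+):
import os
import subprocess

# Build a child environment that carries an exported shell function.
# Post-Shellshock bash recognises variables named BASH_FUNC_<name>%%
# whose value starts with "() {" (assumption: bash 4.4+ encoding).
env = dict(os.environ)
env["BASH_FUNC_ll%%"] = "() { ls -l; }"

# shell=True would run /bin/sh, so invoke bash explicitly.
result = subprocess.run(["bash", "-c", "ll"], env=env,
                        capture_output=True, text=True)
print(result.stdout)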

Related

Running bash command with python subprocess

I have a variable in my .bash_profile, such as id=12345, for which I defined the following alias:
alias obs="echo $id"
since the id will change over time.
Now what I want to do is call this alias in my Python script for different purposes. My default shell is bash, so I have tried the following based on suggestions on the web:
import subprocess
subprocess.call('obs', shell=True, executable='/bin/bash')
subprocess.call(['/bin/bash', '-i', '-c', 'obs'])
subprocess.Popen('obs', shell=True, executable='/bin/bash')
subprocess.Popen(['/bin/bash', '-c', '-i', 'obs'])
However, none of them seems to work! What am I doing wrong?
.bash_profile is not read by Popen and friends.
Environment variables are available for your script, though (via os.environ).
You can use export in your Bash shell to export a value as an environment variable, or use env:
export MY_SPECIAL_VALUE=12345
python -c "import os; print(os.environ['MY_SPECIAL_VALUE'])"
# or
env MY_SPECIAL_VALUE=12345 python -c "import os; print(os.environ['MY_SPECIAL_VALUE'])"
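The same idea works from the Python side as well: copy the current environment and add the value explicitly when spawning the child. A minimal sketch, reusing the MY_SPECIAL_VALUE name from above:
import os
import subprocess

# Copy the parent environment and add/override one variable for the child.
env = dict(os.environ, MY_SPECIAL_VALUE="12345")
subprocess.call(["bash", "-c", 'echo "$MY_SPECIAL_VALUE"'], env=env)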

Execute a shell command with Python

I'm trying to execute a shell command using Python's subprocess. This is how I do it:
pelican = 'pelican content -s /home/pelican/publishconf.pyt -D --ignore-cache'
subprocess.call(pelican, shell=True)
But the response is command not found, even though there is no problem when I run it from my command line.
My question is: how can I execute a shell command from Python so that it behaves just as it would if I typed it in?
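One likely cause is that shell=True runs the command through /bin/sh, whose non-interactive PATH may not include pelican even though your interactive shell finds it. A hedged sketch that sidesteps the PATH difference by resolving the executable first (the fallback path is purely a guess; adjust it to wherever pelican actually lives):
import shutil
import subprocess

# Resolve the executable with this process's PATH, falling back to a
# hypothetical install location.
pelican_bin = shutil.which('pelican') or '/usr/local/bin/pelican'
subprocess.call([pelican_bin, 'content', '-s', '/home/pelican/publishconf.pyt',
                 '-D', '--ignore-cache'])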

Jenkins not printing output of python script in console

I have a Python script (myscript.py) as follows:
#!/bin/python
import os
import optparse
import subprocess
import sys
sys.stdout.flush()
print("I can see this message on Jenkins console output")
cmd = "sshpass -p 'xxx' ssh test@testmachine 'cmd /c cd C:\stage && test.bat'"
retval = subprocess.call(cmd, shell=True)
print(retval)
In Jenkins, I have a job with an Execute Shell step as follows:
#!/bin/sh
./myscript.py
Problem:
Jenkins console shows only "I can see this message on Jenkins console output".
If there is any output from the subprocess call, it does not print it out on the console.
If I PuTTY to Server A and run the same command (./myscript.py) in the shell, I can see the output of the subprocess call.
How can I print this output of the subprocess call on the Jenkins console?
FYI: As you can see from my command, the subprocess call is running a batch file on Windows; Jenkins is running on Linux; there is SSH set up between the two machines.
Edit:
My test.bat looks like this:
echo off
RMDIR /S /Q C:\Test
IF %ERRORLEVEL% NEQ 0 (
ECHO Could not delete
EXIT /b %ERRORLEVEL%
)
If I run this batch file locally on the Windows server, it returns a 1 (because I am holding a file open in the Test folder).
But when the Python script calls this batch file via the subprocess call, all I get is a zero for retval.
Why is this, and how do I fix it? If I can capture the correct retval, I can make the Jenkins job fail.
Edit 12/12:
Hello!! Anybody! Somebody! Help!
I wonder if it has anything to do with stdout being buffered.
Can you try setting PYTHONUNBUFFERED before running your command?
export PYTHONUNBUFFERED=true
In my Jenkins environment, executing Python scripts with the unbuffered argument makes the output appear immediately. Like this:
python3 -u some_script.py
More information comes from the help menu (python3 --help):
-u : force the stdout and stderr streams to be unbuffered;
this option has no effect on stdin; also PYTHONUNBUFFERED=x
TL;DR
The fix is to use some conditional execution (the || operator) on rmdir to fix the errorlevel being returned.
Investigation
This was a corker of a bug, with quite a few twists and turns! We initially suspected that the stdout chain was broken somehow, so we looked into that through explicit use of pipes in Popen, and then by removing sshpass from your command and using the output from ssh directly.
However, that didn't do the trick, so we moved on to looking at the return code of the command. With sshpass removed, ssh should return the result of the command that was run. However, this was always 0 for you.
At this point, I found a known bug in Windows whereby rmdir (which is the same as rd) doesn't always set errorlevel correctly. The fix is to use some conditional execution (the || operator) on rmdir to fix up the errorlevel.
See batch: Exit code for "rd" is 0 on error as well for full details.
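Applied to the test.bat above, a sketch of that fix looks like this (the || branch only runs when RMDIR fails, so a non-zero exit code is returned explicitly rather than relying on %ERRORLEVEL%):
echo off
RMDIR /S /Q C:\Test || (
    ECHO Could not delete
    EXIT /b 1
)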
When you execute your script in a shell, Python sets your shell's STDOUT as the subprocess's STDOUT, so everything that gets executed gets printed to your terminal. I'm not sure why, but when you're executing it in Jenkins the subprocess is not inheriting the shell's STDOUT, so its output is not displayed.
In all likelihood, the best way to solve your problem is to PIPE the STDOUT (and STDERR for good measure) and print it after the process ends. Also, if you exit with the exit code of your subprocess and that code is not 0, it will fail your Jenkins job.
import subprocess
import sys

p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, shell=True)
# communicate() waits for the process to end and drains both pipes,
# avoiding the deadlock that wait() plus a later read can cause.
out, err = p.communicate()
print('Got the following output from the script:\n', out.decode())
print('Got the following errors from the script:\n', err.decode())
print('Script returned exit code:', p.returncode)
sys.exit(p.returncode)

Executing a local shell function on a remote host over ssh using Python

My .profile defines a function
myps () {
ps -aef|egrep "a|b"|egrep -v "c\-"
}
I'd like to execute it from my Python script:
import subprocess
subprocess.call("ssh user#box \"$(typeset -f); myps\"", shell=True)
Getting an error back
bash: -c: line 0: syntax error near unexpected token `;'
bash: -c: line 0: `; myps'
Escaping ; results in
bash: ;: command not found
script='''
. ~/.profile # load local function definitions so typeset -f can emit them
ssh user@box ksh -s <<EOF
$(typeset -f)
myps
EOF
'''
import subprocess
subprocess.call(['ksh', '-c', script]) # no shell=True
There are a few pertinent items here:
The dotfile defining this function needs to be invoked locally before you run typeset -f to dump the function's definition over the wire. By default, a noninteractive shell does not run the majority of dotfiles (any file specified by the ENV environment variable is an exception).
In the given example, this is served by the . ~/.profile command within the script.
The shell needs to be one supporting typeset, so it has to be bash or ksh, not sh (as used by shell=True by default), which may be provided by ash or dash, both lacking this feature.
In the given example, this is served by passing ['ksh', '-c'] as the first two arguments of the argv array.
typeset needs to be run locally, so it can't be in an argv position other than the first with shell=True. (To provide an example: subprocess.Popen(['''printf '%s\n' "$@"''', 'This is just literal data!', '$(touch /tmp/this-is-not-executed)'], shell=True) evaluates only printf '%s\n' "$@" as a shell script; This is just literal data! and $(touch /tmp/this-is-not-executed) are passed as literal data, so no file named /tmp/this-is-not-executed is created.)
In the given example, this is mooted by not using shell=True.
Explicitly invoking ksh -s (or bash -s, as appropriate) ensures that the shell evaluating your function definitions matches the shell you wrote those functions against, rather than having them passed to sh -c, as would happen otherwise.
In the given example, this is served by ssh user@box ksh -s inside the script.
I ended up using this.
import subprocess
import sys
import re
HOST = "user#" + box
COMMAND = 'my long command with many many flags in single quotes'
ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
The original command was not interpreting the ; before myps properly. Using sh -c fixes that, but... (please see Charles Duffy's comments below).
Using a combination of single/double quotes sometimes makes the syntax easier to read and less prone to mistakes. With that in mind, a safe way to run the command (provided the functions in .profile are actually accessible in the shell started by the subprocess.Popen object):
subprocess.call('ssh user@box "$(typeset -f); myps"', shell=True)
An alternative (less safe) method would be to use sh -c for the subshell command:
subprocess.call('ssh user@box "sh -c $(echo typeset -f); myps"', shell=True)
# myps is treated as a command
This seemingly returned the same result:
subprocess.call('ssh user#box "sh -c typeset -f; myps"', shell=True)
There are definitely alternative methods for accomplishing these types of tasks; however, this might give you an idea of what the issue was with the original command.

subprocess Popen in python with command that changes environment

I'm trying to run a script from Python using the subprocess module, executing scripts sequentially.
I'm doing this on UNIX, but before I launch Python in a new shell I need to execute a command (ppack_gnu) that sets up the environment for Python (and prints some lines to the console).
The thing is that when I run this command from a Python subprocess, the process hangs and waits for the command to finish, whereas when I run it in the UNIX console it jumps to the next line automatically.
Examples below:
From UNIX:
[user1@1:~]$ ppack_gnu; echo 1
You appear to be in prefix already (SHELL=/opt/soft/cdtng/tools/ppack_gnu/3.2/bin/bash)
1
[user1@1:~]$
From PYTHON:
processes.append(Popen("ppack_gnu; echo 1", shell=True, stdin=subprocess.PIPE))
This will print Entering Gentoo Prefix /opt/soft/cdtng/tools/ppack_gnu/3.2 - run 'bash -l' to source full bash profiles in the Python console and then hang...
Popen() does not hang: it returns immediately while ppack_gnu may still be running in the background.
The fact that you see the shell prompt does not mean that the command has returned:
⟫ echo $$
9302 # current shell
⟫ bash
⟫ echo $$
12131 # child shell
⟫ exit
⟫ echo $$
9302 # current shell
($$ -- PID of the current shell)
Even in bash, you can't change the environment variables of the parent shell (without gdb or similar hacks); that is why the source command exists.
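If the goal is to pick up the variables that such a command exports, one workaround is to source it in a child shell, dump that shell's environment, and import the result into the Python process. A minimal sketch, assuming a sourceable setup script (setup_env.sh is a hypothetical name) and values without embedded newlines:
import os
import subprocess

# Source the setup script in a child bash, then print the resulting
# environment, one VAR=value pair per line.
out = subprocess.check_output(
    ['bash', '-c', 'source ./setup_env.sh >/dev/null 2>&1; env'], text=True)
for line in out.splitlines():
    key, _, value = line.partition('=')
    os.environ[key] = value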
stdin=PIPE suggests that you want to pass commands to the shell started by ppack_gnu. Perhaps you need to add process.stdin.flush() after the corresponding process.stdin.write(b'command\n').
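For the stdin route, a minimal sketch of feeding a command to the child and flushing it through (assuming ppack_gnu's shell reads commands from its stdin):
import subprocess

process = subprocess.Popen("ppack_gnu", shell=True, stdin=subprocess.PIPE)
process.stdin.write(b'echo 1\n')
process.stdin.flush()   # push the command through the pipe buffer
process.stdin.close()   # send EOF so the child shell can exit
process.wait()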
