I am currently working with pxssh for Python and it's working like a charm; however, it seems that it doesn't process piped (|) commands - it interprets them as separate ones.
At the moment I have the following:
s = pxssh.pxssh()
s.login(sshHost, sshUsername, sshPassword)
s.sendline("cat somefile | grep something | cut -d \' -f 4")
It works properly with any commands that are not piped; however, I need to send a piped one at the moment.
Is there a way around this with pxssh, or can you please suggest a way to implement another solution for this command?
It's not clear to me why pxssh would behave as you describe. Are you sure the issue is not that your \' is interpreted by Python, whereas you want it to be interpreted by the remote shell? That would be better spelled like so:
s.sendline("cat somefile | grep something | cut -d \\' -f 4")
You certainly do have alternatives. One would be to use a single command instead of a pipeline, such as:
s.sendline("sed -n '/something/ { s/\\([^,]*,\\)\\{3\\}\\([^,]*\\),.*/\\2/; p }' somefile")
As a special case of that, you could launch a subshell in which to run your pipeline:
s.sendline('''bash -c "cat somefile | grep something | cut -d \\' -f 4"''')
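For reference, reading the pipeline's output back through pxssh would look roughly like this (a minimal sketch built on the question's own variables; the import assumes pexpect 4.x, where pxssh lives inside the pexpect package):
from pexpect import pxssh

s = pxssh.pxssh()
s.login(sshHost, sshUsername, sshPassword)
s.sendline("cat somefile | grep something | cut -d \\' -f 4")
s.prompt()        # wait for the remote prompt to return
print(s.before)   # everything the pipeline printed before the prompt
s.logout()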
Related
I am trying to write a bash script to use in python code.
Multi-line bash command (this works perfectly when run directly from terminal)
mydatefile="/var/tmp/date"
while IFS= read line
do
    echo $line
    sh /opt/setup/Script/EPS.sh $(echo $line) | grep "WORD" | awk -F ',' '{print $6}'
    sleep 1
done <"$mydatefile"
My single line conversion
mydatefile="/var/tmp/date;" while IFS= read line do echo $line; sh /opt/setup/Script/EPS.sh $(echo $line) | grep "WORD" | awk -F ',' '{print $6}'; sleep 1; done <"$mydatefile";
ERROR
-bash: syntax error near unexpected token `done'
Missing a ; (fatal syntax error):
while IFS= read line; do echo ...
# ^
# here
More in depth: combine grep and awk into a single command:
mydatefile="/var/tmp/date"
while IFS= read line; do
    echo "$line"
    sh /opt/setup/Script/EPS.sh "$line" |
        awk -F ',' '/WORD/{print $6}'
    sleep 1
done < "$mydatefile"
Use more quotes!
Learn how to quote properly in shell; it's very important:
"Double quote" every literal that contains spaces/metacharacters and every expansion: "$var", "$(command "$var")", "${array[@]}", "a & b". Use 'single quotes' for code or literal $'s: 'Costs $5 US', ssh host 'echo "$HOSTNAME"'. See
http://mywiki.wooledge.org/Quotes
http://mywiki.wooledge.org/Arguments
http://wiki.bash-hackers.org/syntax/words
finally:
mydatefile="/var/tmp/date"; while IFS= read line; do echo "$line"; sh /opt/setup/Script/EPS.sh "$line" | awk -F ',' '/WORD/{print $6}'; sleep 1; done < "$mydatefile"
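Since the one-liner is ultimately meant to be used from Python code, a minimal sketch of invoking it with subprocess (my assumption that subprocess is acceptable here; the paths are the ones from the question, and capture_output needs Python 3.7+) could be:
import subprocess

one_liner = (
    'mydatefile="/var/tmp/date"; '
    'while IFS= read line; do '
    'echo "$line"; '
    "sh /opt/setup/Script/EPS.sh \"$line\" | awk -F ',' '/WORD/{print $6}'; "
    'sleep 1; '
    'done < "$mydatefile"'
)

result = subprocess.run(["bash", "-c", one_liner], capture_output=True, text=True)
print(result.stdout)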
One way to do this conversion is to paste the script onto the command line and then look it up in the history - though this may depend on which command-line editor you have. Note that you do need a semicolon before do, but NOT after it. You are punished for too many semicolons as well as too few.
Another way is to fold the script into a single line, one line at a time, testing as you go.
The binary-chop approach is to do the first half, test, and then either undo or continue.
Once you have it down to one line that works, you can paste it into Python.
Only recently started to learn Python, so please excuse the noob question (might be doing this completely wrong to begin with)
I'm trying to get user input to use with os.system
import os
account = input("Account name: ")
os.system("op get item + account + | awk -F':' '{print $24}' | cut -d '}' -f 1")
This command takes the information from 1Password and outputs just the password (for Facebook in this example). It works, but as soon as I try to add the variable it breaks:
op get item Facebook | awk -F':' '{print $24}' | cut -d '}' -f 1
I hope someone here can help me out or point me in the right direction, thanks!
I think you need to work with the output of these commands, so you can use the subprocess module instead; see Replacing Shell Pipeline.
For example:
from subprocess import check_output
check_output(f"op get item {account} | awk -F':' '{{print $24}}' | cut -d '}}' -f 1", shell=True)
Be aware that you should sanitize the account variable before passing it to your system calls. :)
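If you prefer to avoid shell=True altogether (which also removes most of the sanitization worry), the same pipeline can be built step by step with Popen, roughly in the style of the Replacing Shell Pipeline recipe. A sketch, assuming the op CLI from the question is on your PATH:
from subprocess import Popen, PIPE

account = input("Account name: ")

op = Popen(["op", "get", "item", account], stdout=PIPE)
awk = Popen(["awk", "-F:", "{print $24}"], stdin=op.stdout, stdout=PIPE)
cut = Popen(["cut", "-d", "}", "-f", "1"], stdin=awk.stdout, stdout=PIPE)
op.stdout.close()    # let earlier stages get SIGPIPE if a later stage exits
awk.stdout.close()
password = cut.communicate()[0].decode().strip()
print(password)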
You just need string substitution, e.g. with Python 3.6+ the elegant:
os.system(f"op get item {account} | awk -F':'" + " '{print $24}' | cut -d '}' -f 1")
I have a shell command which parses certain content and gives the required output. I need to implement this in Python, but the shell command contains a newline character "\n" which does not behave as expected when the command is run from Python.
Of the many lines in the output log, the required line looks like - configurationFile=/app/log/conf/the_jvm_name.4021.logback.xml
I would only need the_jvm_name from the above. The syntax will always be the same. The shell command works fine.
Shell Command -
ps -ef | grep 12345 | tr " " "\n" | grep logback.configurationFile | awk -F"/" '{print $NF}'| cut -d. -f1
Python (escaped all the required double quotes) -
import subprocess
pid_arr = "12345"
sh_command = "ps -ef | grep "+pid_arr+" | tr \" \" \"\n\" | grep configurationFile | awk -F \"/\" '{print $NF}' | cut -d. -f1"
outpt = subprocess.Popen(sh_command , shell=True,stdout=subprocess.PIPE).communicate()[0].decode('utf-8').strip()
With Python, I'm not getting the desired output; it just prints configurationFile as it appears in the command.
What am I missing here? Is there a better way to get these details?
You can achieve what you want using a regex substitution in Python:
import re
import subprocess

output = subprocess.check_output(["ps", "-ef"]).decode()
for line in output.splitlines():
    if re.search("12345", line):
        jvm_name = re.sub(r".*configurationFile=.*/([^.]+).*", r"\1", line)
This captures the part after the last / in the configuration file path, up to the next ..
You could make it slightly more robust by checking only the second column (the PID) for 12345, either by splitting each line on whitespace:
cols = re.split(r"\s+", line)
if len(cols) > 1 and cols[1] == "12345":
or by using a better regex, like:
if re.match(r"\S+\s+12345\s", line):
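Put together with the PID check, the whole snippet would look roughly like this (a sketch assuming Python 3, hence the .decode()):
import re
import subprocess

output = subprocess.check_output(["ps", "-ef"]).decode()
jvm_name = None
for line in output.splitlines():
    if re.match(r"\S+\s+12345\s", line):    # check only the PID column
        jvm_name = re.sub(r".*configurationFile=.*/([^.]+).*", r"\1", line)
print(jvm_name)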
Note that you could also shorten your pipe considerably by just doing something like:
ps -ef | sed -nE '/12345/ { s/.*configurationFile=.*\/([^.]*).*/\1/; p }'
Your shell command works, but it has to deal with too many lines of output and too many fields per line. An easier solution is to tell the ps command to just give you 1 line and on that line, just one field that you care about. For example, on my system:
ps -o cmd h 979
will output:
/usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address 3
The -o cmd flag outputs only the CMD column, the h option tells ps to omit the header, and 979 is the process ID, which tells ps to output information just for that process.
This output is not exactly what you have in your problem, but it is similar enough. Once we have limited the output, we eliminate the need for the other commands such as grep, awk, and so on. At this point, we can use a regular expression to extract what we want:
from __future__ import print_function
import re
import subprocess

pid = '979'
command = ['ps', '-o', 'cmd', 'h', pid]
output = subprocess.check_output(command).decode()

pattern = re.compile(r"""
    config-file=    # Literal string search
    .+\/            # Everything up to the last forward slash
    ([^.]+)         # Non-dot chars, this is what we want
    """, re.VERBOSE)

matched = pattern.search(output)
if matched:
    print(matched.group(1))
Notes
For the regular expression, I am using the verbose form, which lets me annotate the pattern with comments. I like this style, as regular expressions can be difficult to read.
On your system, adjust the config-file= part of the pattern to match your output (e.g. configurationFile=).
It seems like I can't use an apostrophe; the command fails with no informative error.
I'm trying to execute the following:
secretLsCmd = subprocess.Popen(('docker', 'secret', 'ls') , stdout=subprocess.PIPE)
oneWhitespaceCmd = subprocess.Popen(('tr', '-s','" "') , stdout=subprocess.PIPE, stdin=secretLsCmd.stdout)
onlySecretsCmd = subprocess.check_output(('cut', "-d' '", '-f2') , stdin=oneWhitespaceCmd.stdout)
in a normal Linux terminal, it would execute the following command:
docker secret ls | tr -s " " | cut -d' ' -f2
Running this command in the CLI works fine, but once I put it in Python it isn't working. The first two commands in the pipe work fine (I have checked); the last one does not, exiting with error code 1... I'm almost 100% sure it is the -d' ', but how can I fix that? Any idea?
This line:
oneWhitespaceCmd = subprocess.Popen(('tr', '-s','" "'), ...)
is actually running:
tr -s '" "'
so you want to lose the extra double quotes there: Python passes each argument to the command as-is, so you don't need to add any shell-style quoting yourself.
This command:
onlySecretsCmd = subprocess.check_output(('cut', "-d' '", '-f2'), ...)
is equivalent to the shell command:
cut '-d'"'"' '"'"'' -f2
so again, you probably just want to lose the quotes round the whitespace:
onlySecretsCmd = subprocess.check_output(('cut', "-d ", '-f2'), ...)
and leave it to Python to insert quotes where required. This will actually run (what should be equivalent to what you want though not identical):
cut '-d ' -f2
I used shlex.quote() to show a shell equivalent of the commands you are running, but in practice, unless you tell subprocess to invoke a shell, it executes the command directly with the argument list you give it - effectively what bash would execute after parsing all the quote marks and escapes. So internally no escaping happens, but the quotes that would be needed to separate the arguments in a shell aren't needed either.
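Putting the corrections together, a sketch of the full pipeline (the same commands as in the question, just with the extra quotes removed) might look like:
import subprocess

secretLsCmd = subprocess.Popen(('docker', 'secret', 'ls'), stdout=subprocess.PIPE)
oneWhitespaceCmd = subprocess.Popen(('tr', '-s', ' '),
                                    stdin=secretLsCmd.stdout, stdout=subprocess.PIPE)
onlySecretsCmd = subprocess.check_output(('cut', '-d ', '-f2'),
                                         stdin=oneWhitespaceCmd.stdout)
print(onlySecretsCmd.decode())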
So I have this shell script:
echo "Enter text to be classified, hit return to run classification."
read text
if [ `echo "$text" | sed -r 's/ +/ /g' | bin/stupidfilter data/c_rbf` = "1.000000" ]
then
    echo "Text is not likely to be stupid."
fi
if [ `echo "$text" | sed -r 's/ +/ /g' | bin/stupidfilter data/c_rbf` = "0.000000" ]
then
    echo "Text is likely to be stupid."
fi
I would like to write it in python. How do I do this?
(As you can see it uses the library http://stupidfilter.org/stupidfilter-0.2-1.tar.gz)
To do it just like the shell script does:
import subprocess
text = raw_input("Enter text to be classified: ")
# note: the shell script also squeezes runs of spaces with sed; add that step if it matters
p1 = subprocess.Popen(['bin/stupidfilter', 'data/c_rbf'],
                      stdin=subprocess.PIPE, stdout=subprocess.PIPE)
stupid = float(p1.communicate(text)[0])
if stupid:
    print "Text is not likely to be stupid"
else:
    print "Text is likely to be stupid"
You could clearly run the commands as sub-shells and read the return value, just as in the shell script, and then process the result in Python.
This is simpler than loading C functions.
If you really want to load a function from the stupidfilter library, then first look to see whether someone else has already done it. If you cannot find anyone who has, then read the manual - how to call C from Python is covered in there.
It is still simpler to use what others have already done.
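If you do end up binding to the C library yourself, a heavily hypothetical ctypes sketch might look like the following; the shared-library name and the exported function (stupid_classify) are placeholders I have made up, so check stupidfilter's own headers for the real names and signatures:
import ctypes

# NOTE: the library name and function below are hypothetical placeholders,
# not the actual stupidfilter API - consult its headers before relying on this.
lib = ctypes.CDLL("libstupidfilter.so")
lib.stupid_classify.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
lib.stupid_classify.restype = ctypes.c_double

score = lib.stupid_classify(b"data/c_rbf", b"text to classify")
print("Text is not likely to be stupid" if score else "Text is likely to be stupid")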