Here's a quick snippet of my code using pexpect:
child.expect('tc#')
child.sendline('ps -o args | grep lp_ | grep -v grep | sort -n')
child.expect('tc#')
print(child.before)
child.sendline('exit')
and then the output:
user#myhost:~/Python$ python tctest.py
tc-hostname:~$ ps -o args | grep lp_ | grep -v grep | sort -n
/usr/local/bin/lp_server -n 5964 -d /dev/usb/lp1
/usr/local/bin/lp_server -n 5965 -d /dev/usb/lp0
{lp_supervisor} /bin/sh /usr/local/lp/lp_supervisor /dev/usb/lp0 SERIAL#1 /var/run/lp/lp_pid/usb_lp0
{lp_supervisor} /bin/sh /usr/local/lp/lp_supervisor /dev/usb/lp1 SERIAL#2 /var/run/lp/lp_pid/usb_lp1
user#myhost:~$
There are 4 lines of output. The first two lines show which printer port each USB device is assigned to (e.g. the first line shows port 5964 is assigned to lp1).
The 3rd and 4th lines show which device serial number is assigned to which USB port (e.g. SERIAL#1 is assigned to lp0).
I need to somehow parse that output so I can do the following:
If SERIAL#1 is not assigned to 5964:
    run some command
else:
    do something else
If SERIAL#2 is not assigned to 5965:
    run some command
else:
    do something else
I'm not sure how to manipulate that output so I can get the desired variables. Any help is appreciated.
You can extract the port and serial information from the pexpect output with re.findall and do something like this:
import re
data = child.before
ports = re.findall(r'lp_server -n (\d+)', data)
# ['5964', '5965']
serials = re.findall(r'(SERIAL#\d+)', data)
# ['SERIAL#1', 'SERIAL#2']
list(zip(ports, serials))
# [('5964', 'SERIAL#1'), ('5965', 'SERIAL#2')]
for port, serial in zip(ports, serials):
    # Check whether this serial and port pair matches the expectation
    ...
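Note that zip pairs the results purely by position, which only matches the device-level assignment if the lp_server and lp_supervisor lines come out in the same device order. A variant (just a sketch, assuming data has already been decoded to a plain string) that pairs port and serial through the shared /dev/usb/lpN device instead:
port_by_dev = {dev: port for port, dev in
               re.findall(r'lp_server -n (\d+) -d (/dev/usb/lp\d+)', data)}
serial_by_dev = {dev: serial for dev, serial in
                 re.findall(r'lp_supervisor (/dev/usb/lp\d+) (SERIAL#\d+)', data)}
port_by_serial = {serial: port_by_dev[dev] for dev, serial in serial_by_dev.items()}
# {'SERIAL#1': '5965', 'SERIAL#2': '5964'}  (ports are strings here)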
Another way of doing it is by using dictionaries to build relationships between device serial numbers and printer ports:
inString = """/usr/local/bin/lp_server -n 5964 -d /dev/usb/lp1
/usr/local/bin/lp_server -n 5965 -d /dev/usb/lp0
{lp_supervisor} /bin/sh /usr/local/lp/lp_supervisor /dev/usb/lp0 SERIAL#1 /var/run/lp/lp_pid/usb_lp0
{lp_supervisor} /bin/sh /usr/local/lp/lp_supervisor /dev/usb/lp1 SERIAL#2 /var/run/lp/lp_pid/usb_lp1"""
inString = inString.split("\n")
matches = dict()
serials = dict()
# First two lines: map lp device -> printer port (e.g. 'lp1' -> 5964)
for i in range(len(inString[:2])):
    lp = inString[i][-3:]
    printerPort = int(inString[i].split("-n ")[1][:4])
    matches.update({lp: printerPort})
# Remaining lines: map serial number -> lp device (e.g. 'SERIAL#1' -> 'lp0')
for i in range(2, len(inString)):
    t = inString[i].split(" ")
    lp = t[3][-3:]
    serial = t[4]
    serials.update({serial: lp})
finalLookup = dict((k,matches[v]) for k,v in serials.items())
print(finalLookup)
Output:
{'SERIAL#1': 5965, 'SERIAL#2': 5964}
Then you can do:
if not finalLookup['SERIAL#1'] == 5964:
    run some command
else:
    do something else
if not finalLookup['SERIAL#2'] == 5965:
    run some command
else:
    do something else
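The same check can also be written once as a loop over the expected assignments; fix_assignment() and report_ok() below are hypothetical placeholders for whatever "run some command" / "do something else" stand for in your setup:
expected = {'SERIAL#1': 5964, 'SERIAL#2': 5965}
for serial, port in expected.items():
    if finalLookup.get(serial) != port:
        fix_assignment(serial, port)   # hypothetical "run some command"
    else:
        report_ok(serial, port)        # hypothetical "do something else"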
Related
I'm having trouble passing Tshark as a command to Popen. In particular, when I add the capture filter, the program gets stuck.
command = 'sudo tshark -i wlan1 -f "subtype probe-req" -n -N mnNtdv -Tfields -e wlan.ta -e wlan.ra -e wlan.seq -e wlan_radio.signal_dbm -e wlan.fc.type_subtype'
p = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
for packet in iter(p.stdout.readline, b''):
    packet_string = packet.rstrip().decode("utf-8")  # bytes to string
    packet_info = re.split(' |\t', packet_string)    # extract info from the probe request
    print("PCKT String: ", packet_info)
When I remove the -f filter everything works fine, but when I add it the program seems stuck before the for loop.
I've solved it by adding -l to the command. -l makes tshark flush its standard output after each packet, so the output isn't held back in a block buffer while readline() waits.
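A sketch of the adjusted call, identical to the one above except for the added -l (everything else is assumed unchanged):
import subprocess
command = ('sudo tshark -l -i wlan1 -f "subtype probe-req" -n -N mnNtdv -Tfields '
           '-e wlan.ta -e wlan.ra -e wlan.seq -e wlan_radio.signal_dbm -e wlan.fc.type_subtype')
p = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
for packet in iter(p.stdout.readline, b''):
    print(packet.rstrip().decode("utf-8"))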
I'm writing two scripts, the first in bash and the second one in Python. The desired output is an IP address and the port number on the same line without spaces, like
ip:port
Here's the bash:
#! /bin/sh
echo $(find /u01/ -name config.xml |grep -v bak| xargs grep -A4 AdminServer | grep listen-address | cut -d'>' -f 2 | cut -d'<' -f 1)
and its output
172.31.138.15
The Python:
import os
import sys
from java.lang import System
import getopt
import time
values = os.popen(str('sh /home/oracle/scripts/wls/adminurl.sh'))
url = str("".join(map(str, values)))
port = ":7001"
adminurl = url + port + "\n"
def connectToDomain():
try:
if ServerName != "" or username == "" and password == "" and adminUrl == "":
print (adminurl)
connect(userConfigFile='/home/oracle/scripts/wls/userconfig.secure', userKeyFile='/home/oracle/scripts/wls/userkey.secure', url=adminurl, timeout=60000)
[...]
and its output
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
172.31.138.15
:7001
Connecting to t3://172.31.138.15
:7001
with userid weblogic ...
This Exception occurred at Fri Jan 10 18:00:22 CET 2020.
javax.naming.ServiceUnavailableException: 172.31.138.15
: unknown error [Root exception is java.net.UnknownHostException: 172.31.138.15
: unknown error]
The domain is unreacheable
I need the IP value on the same line as the port value so that 'adminurl' is accepted as a valid url argument by the 'connect' function.
Any help is appreciated!
The output captured by os.popen() ends with a newline, which is why the IP and the port end up on separate lines; strip it before concatenating:
adminurl = url.rstrip() + port + "\n"
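A minimal sketch of the surrounding lines with the fix applied (same script path and port as in the question); the trailing "\n" is dropped here as well, on the assumption that connect() only needs host:port and print() can add its own newline:
values = os.popen('sh /home/oracle/scripts/wls/adminurl.sh')
url = "".join(values).rstrip()   # '172.31.138.15', trailing newline removed
port = ":7001"
adminurl = url + port            # '172.31.138.15:7001'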
I have the following nmap command:
nmap -n -p 25 10.11.1.1-254 --open | grep '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\.[0-9]\{1,3\}' | cut -d" " -f5
This produces a list of ip addresses which I'm trying to pass to the following python script:
#!/usr/bin/python
# Python tool to check a range of hosts for SMTP servers that respond to VRFY requests
import socket
import sys
from socket import error as socket_error
# Read the username file
with open(sys.argv[1]) as f:
    usernames = f.read().splitlines()
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host_ip = sys.argv[2]
print("****************************")
print("Results for: " + host_ip)
try:
    c = s.connect((host_ip, 25))
    banner = s.recv(1024)
    # Send VRFY requests and print result
    for user in usernames:
        s.send('VRFY ' + user + '\r\n')
        result = s.recv(1024)
        print(result)
    print("****************************")
    # Close Socket
    s.close()
# If error is thrown
except socket_error as serr:
    print("\nNo SMTP verify for " + host_ip)
    print("****************************")
I've tried to do this with the following command; however, it only runs the script on the first IP that it finds:
./smtp_verify.py users.txt $(nmap -n -p 25 10.11.1.1-254 --open | grep '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\.[0-9]\{1,3\}' | cut -d" " -f5)
I've also tried to do this with:
for $ip in (nmap -n -p 25 10.11.1.1-254 --open | grep '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\.[0-9]\{1,3\}' | cut -d" " -f5); do ./smtp_verify.py users.txt $ip done
However I receive a syntax error for it which suggests to me I can't pass pipes this way?
bash: syntax error near unexpected token `('
Don't use a for loop to parse command output (see DontReadLinesWithFor); use process-substitution syntax with a while loop instead:
#!/bin/bash
while IFS= read -r line; do
./smtp_verify.py users.txt "$line"
done< <(nmap -n -p 25 10.11.1.1-254 --open | grep '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\.[0-9]\{1,3\}' | cut -d" " -f5)
As for the error you are seeing: you are not using the command-substitution $(..) syntax properly to run the piped commands. The commands should be enclosed in () with a $ in front, and the loop variable is written as ip, not $ip. Something like:
#!/bin/bash
for ip in $(nmap -n -p 25 10.11.1.1-254 --open | grep '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\.[0-9]\{1,3\}' | cut -d" " -f5); do
./smtp_verify.py users.txt "$ip"
done
And remember to always double-quote shell variables to avoid Word Splitting done by the shell.
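As a side note, the first attempt (./smtp_verify.py users.txt $(nmap ...)) does pass every address to the script, but the script only ever reads sys.argv[2], which is why only the first IP gets checked. If you prefer that invocation, here is a sketch of the script looping over all of its address arguments instead (assuming Python 2, as in the original):
import socket
import sys
with open(sys.argv[1]) as f:
    usernames = f.read().splitlines()
for host_ip in sys.argv[2:]:  # every IP supplied on the command line
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("****************************")
    print("Results for: " + host_ip)
    try:
        s.connect((host_ip, 25))
        banner = s.recv(1024)
        for user in usernames:
            s.send('VRFY ' + user + '\r\n')
            print(s.recv(1024))
        print("****************************")
        s.close()
    except socket.error as serr:
        print("\nNo SMTP verify for " + host_ip)
        print("****************************")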
I'm trying to prevent uploads to S3 if any earlier command in the pipeline fails; unfortunately, neither of these two methods works as expected:
Shell pipeline
for database in sorted(databases):
    cmd = "bash -o pipefail -o errexit -c 'mysqldump -B {database} | gpg -e -r {GPGRCPT} | gof3r put -b {S3_BUCKET} -k {database}.sql.e'".format(database = database, GPGRCPT = GPGRCPT, S3_BUCKET = S3_BUCKET)
    try:
        subprocess.check_call(cmd, shell = True, executable="/bin/bash")
    except subprocess.CalledProcessError as e:
        print e
Popen with PIPEs
for database in sorted(databases):
    try:
        cmd_mysqldump = "mysqldump {database}".format(database = database)
        p_mysqldump = subprocess.Popen(shlex.split(cmd_mysqldump), stdout=subprocess.PIPE)
        cmd_gpg = "gpg -a -e -r {GPGRCPT}".format(GPGRCPT = GPGRCPT)
        p_gpg = subprocess.Popen(shlex.split(cmd_gpg), stdin=p_mysqldump.stdout, stdout=subprocess.PIPE)
        p_mysqldump.stdout.close()
        cmd_gof3r = "gof3r put -b {S3_BUCKET} -k {database}.sql.e".format(S3_BUCKET = S3_BUCKET, database = database)
        p_gof3r = subprocess.Popen(shlex.split(cmd_gof3r), stdin=p_gpg.stdout, stderr=open("/dev/null"))
        p_gpg.stdout.close()
    except subprocess.CalledProcessError as e:
        print e
I tried something like this with no luck:
....
if p_gpg.returncode == 0:
    cmd_gof3r = "gof3r put -b {S3_BUCKET} -k {database}.sql.e".format(S3_BUCKET = S3_BUCKET, database = database)
    p_gof3r = subprocess.Popen(shlex.split(cmd_gof3r), stdin=p_gpg.stdout, stderr=open("/dev/null"))
    p_gpg.stdout.close()
...
Basically gof3r is streaming data to S3 even if there are errors, for instance when I intentionally change mysqldump -> mysqldumpp to generate an error.
I had the exact same question, and I managed it with:
cmd = "cat file | tr -d '\\n'"
subprocess.check_call( [ '/bin/bash' , '-o' , 'pipefail' , '-c' , cmd ] )
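Applied to the pipeline from the question, that looks roughly like this (a sketch; database, GPGRCPT and S3_BUCKET are the same variables as in the question's loop, and passing the argument list directly avoids nesting the pipeline inside another layer of shell quoting):
pipeline = ("mysqldump -B {database} | gpg -e -r {GPGRCPT} | "
            "gof3r put -b {S3_BUCKET} -k {database}.sql.e").format(
                database=database, GPGRCPT=GPGRCPT, S3_BUCKET=S3_BUCKET)
subprocess.check_call(['/bin/bash', '-o', 'pipefail', '-o', 'errexit', '-c', pipeline])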
Thinking back, and searching in my code, I used another method too:
subprocess.check_call( "ssh -c 'make toto 2>&1 | tee log.txt ; exit ${PIPESTATUS[0]}'", shell=True )
All commands in a pipeline run concurrently e.g.:
$ nonexistent | echo it is run
the echo is always run even though the nonexistent command does not exist.
pipefail affects the exit status of the pipeline as a whole -- it does not make gof3r exit any sooner
errexit has no effect because there is a single pipeline here.
If you meant that you don't want to start the next pipeline if the one from the previous iteration fails, then put break after print e in the exception handler.
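For example, in the first version from the question (a sketch; cmd is the same bash -o pipefail ... string built there):
for database in sorted(databases):
    cmd = ("bash -o pipefail -o errexit -c 'mysqldump -B {database} | gpg -e -r {GPGRCPT} | "
           "gof3r put -b {S3_BUCKET} -k {database}.sql.e'").format(
               database=database, GPGRCPT=GPGRCPT, S3_BUCKET=S3_BUCKET)
    try:
        subprocess.check_call(cmd, shell=True, executable="/bin/bash")
    except subprocess.CalledProcessError as e:
        print(e)
        break  # a failure here also stops the remaining databases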
p_gpg.returncode is None while gpg is running. If you don't want gof3r to run when gpg fails, then you have to save gpg's output somewhere else first, e.g. in a file:
filename = 'gpg.out'
for database in sorted(databases):
    pipeline_no_gof3r = ("bash -o pipefail -c 'mysqldump -B {database} | "
                         "gpg -e -r {GPGRCPT}'").format(**vars())
    with open(filename, 'wb', 0) as file:
        if subprocess.call(shlex.split(pipeline_no_gof3r), stdout=file):
            break  # don't upload to S3, don't run the next database pipeline
    # upload the file on success
    gof3r_cmd = 'gof3r put -b {S3_BUCKET} -k {database}.sql.e'.format(**vars())
    with open(filename, 'rb', 0) as file:
        if subprocess.call(shlex.split(gof3r_cmd), stdin=file):
            break  # don't run the next database pipeline
I am trying to use subprocess to execute a socat command. It just echoes the command as 'show stat | socat unix-connect:/users/viperias/Applications/haproxy.stat stdio' rather than printing the output of the command. It works perfectly for other commands. Am I missing something?
import subprocess

def main():
    cmd = "echo show stat | socat unix-connect:/users/viperias/Applications/haproxy.stat stdio"
    ps = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = ps.communicate()[0]
    print (output)
main()