I have a list of IPs that I want to run a whois against (using the Linux whois tool), showing only the Country field.
Here is my script:
import os
import time
iplist = open('ips.txt').readlines()
for i in iplist:
    time.sleep(2)
    print "Country: IP {0}".format(i)
    print os.system("whois -h whois.arin.net + {0} | grep Country ".format(i))
So I want to display which IP is being run, then just the Country info via grep. I see this error when I run it, and the grep is not run:
sh: -c: line 1: syntax error near unexpected token `|'
sh: -c: line 1: ` | grep Country '
The code below works, so it must be an issue with my for loop:
print os.system("whois -h whois.arin.net + {0} | grep Country ".format('8.8.8.8'))
What am I doing wrong? Thank you!!!!
You're not stripping trailing newlines from the lines you read from the file. As a result, you are passing to os.system a string like "whois -h whois.arin.net + a.b.c.d\n | grep Country". The shell parses the string as two commands and complains of "unexpected token |" at the beginning of the second one. This explains why there is no error when you use a hand-made string such as "8.8.8.8".
Add i = i.strip() after the sleep, and the problem will go away.
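For reference, here is the loop with that one-line fix applied; everything else is unchanged from the question:
import os
import time

iplist = open('ips.txt').readlines()
for i in iplist:
    time.sleep(2)
    i = i.strip()  # drop the trailing newline so the pipe stays part of the same shell command
    print "Country: IP {0}".format(i)
    # os.system returns the exit status, not the output, so there is no need to print its return value
    os.system("whois -h whois.arin.net + {0} | grep Country".format(i))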
user4815162342 is correct about the issue you are having, but might I suggest replacing os.system with subprocess.Popen? Capturing output from os.system is not intuitive; should you want the result to go anywhere but your screen, you are likely going to have issues.
from subprocess import Popen, PIPE

server = 'whois.arin.net'

def find_country(ip):
    proc = Popen(['whois', '-h', server, ip], stdout=PIPE, stderr=PIPE)
    stdout, stderr = proc.communicate()
    if stderr:
        raise Exception("Error with `whois` subprocess: " + stderr)
    for line in stdout.split('\n'):
        if line.startswith('Country:'):
            return line.split(':')[1].strip()  # Good place for regex

for ip in [i.strip() for i in open('ips.txt').readlines()]:
    print find_country(ip)
Python is awesome at string handling; there should be no reason to create a grep subprocess to pattern-match the output of a separate subprocess.
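As the "Good place for regex" comment hints, the field extraction can also be done with re instead of splitting on ':'; a small sketch (find_country_re is just an illustrative name):
import re

def find_country_re(whois_output):
    # look for a line such as "Country:        US" anywhere in the whois output
    match = re.search(r'^Country:\s*(\S+)', whois_output, re.MULTILINE)
    return match.group(1) if match else None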
Try sh:
import time
import sh

iplist = open('ips.txt').readlines()
for i in iplist:
    time.sleep(2)
    i = i.strip()  # remove the trailing newline from the line read from the file
    print "Country: IP {0}".format(i)
    print sh.grep(sh.whois(i, h="whois.arin.net"), "Country")
I want to run the ping -t www.google.com command from a Python script.
So far I have made one that works with the ping www.google.com command, but I didn't succeed in doing one with ping -t, which loops indefinitely.
You can find below my ping.py script:
import subprocess
my_command="ping www.google.com"
my_process=subprocess.Popen(my_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
result, error = my_process.communicate()
result = result.strip()
error = error.strip()
print ("\nResult of ping command\n")
print("-" *22)
print(result.decode('utf-8'))
print(error.decode('utf-8'))
print("-" *22)
input("Press Enter to finish...")
I want the command box to stay open when finished. I am using Python 3.7.
If you want to keep the process open and communicate with it all the time, you can use my_process.stdout and, e.g., iterate over its lines. With communicate you wait until the process completes, which would be bad for an indefinitely running process :)
import subprocess

my_command = ["ping", "www.google.com"]
my_process = subprocess.Popen(my_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while True:
    print(my_process.stdout.readline())
EDIT
In this version we use re to only get the "time=xxms" part from the output:
import subprocess
import re

my_command = ["ping", "-t", "www.google.com"]
my_process = subprocess.Popen(my_command, stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE)
while True:
    line = my_process.stdout.readline()                 # read a line - one ping in this case
    line = line.decode("utf-8")                         # decode the byte literal to a string
    line = re.sub("(?s).*?(time=.*ms).*", "\\1", line)  # keep only the time=xxms part
    print(line)
I am working on a script that takes a text file containing IP addresses (one per line) and then passes each IP to an external, non-Python command.
The result is an error:
TypeError: not all arguments converted during string formatting
import subprocess

list = 'c:\cmc_list.txt'
with open(list, 'r') as cmc_list:
    for i in cmc_list:
        racadm_command = "racadm -r %s -u root -p calvin getslotname" % i
        output = subprocess.Popen(racadm_command % i, stdout=subprocess.PIPE,
                                  shell=True).communicate()[0]
        print(racadm_command, output)
The string passed to the Popen command has already been formatted, so it has no % left to consume the i. Take away the "% i" and I think you'll be fine.
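In other words, something like this (I am also stripping the trailing newline from the line read from the file, which would otherwise end up inside the shell command):
racadm_command = "racadm -r %s -u root -p calvin getslotname" % i.strip()
output = subprocess.Popen(racadm_command, stdout=subprocess.PIPE,
                          shell=True).communicate()[0]
print(racadm_command, output)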
I believe you meant to do the formatting only once:
import subprocess

cmc_list = r'c:\cmc_list.txt'  # raw string so the backslash is taken literally
racadm_command_template = "racadm -r {} -u root -p calvin getslotname"
with open(cmc_list, 'r') as f:
    for ip in f:
        cmd = racadm_command_template.format(ip.strip())  # strip the trailing newline
        output = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True).communicate()
        print(cmd, output[0])
I also suggest using getpass for the password prompt, or reading the password from an environment variable. Additionally, don't print out the password, and please change it from the default.
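A rough sketch of that suggestion; getpass is in the standard library, and RACADM_PASSWORD is just an example variable name:
import os
import getpass

# read the password from an environment variable if it is set, otherwise prompt for it
password = os.environ.get('RACADM_PASSWORD') or getpass.getpass('racadm password: ')
racadm_command_template = "racadm -r {} -u root -p {} getslotname"
# then, inside the loop:  cmd = racadm_command_template.format(ip.strip(), password)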
I'm using os.system to tail a live file and grep for a string.
How can I execute something when the grep succeeds?
For example
cmd = os.system("tail -f file.log | grep -i abc")
if cmd:
    pass  # Do something and continue tail
Is there any way I can do this? It only reaches the if block after the os.system call has completed.
You can use subprocess.Popen and read lines from stdout:
import subprocess

def tail(filename):
    process = subprocess.Popen(['tail', '-F', filename], stdout=subprocess.PIPE)
    while True:
        line = process.stdout.readline()
        if not line:
            process.terminate()
            return
        yield line.decode()  # decode bytes to str so callers can compare against plain strings
For example:
for line in tail('test.log'):
    if line.startswith('error'):
        print('Error:', line)
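To mirror the original tail -f file.log | grep -i abc, the check can be a case-insensitive substring test; the loop keeps tailing on its own after each match:
for line in tail('file.log'):
    if 'abc' in line.lower():   # the -i of grep -i abc
        # do something and continue tailing
        print('Matched:', line)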
I am not sure that you really need to do this in Python; perhaps it would be easier to pipe the tail -f output into awk: https://superuser.com/questions/742238/piping-tail-f-into-awk
If you want to work in Python (because you need to do some processing afterwards), then check this link on how to use tail -f: How can I tail a log file in Python?
I am new to Python and am trying to make a script which checks whether a specified host, for example sensu-client, exists. I use deployment software called NSO; I run nso status and it shows me this information:
nagios-client host nagios-client down
test host test down
Is there any way to check with a script whether, for example, nagios-client exists?
In shell I do it by:
nso status | awk '{ print $1 }'
In this case I would suggest using subprocess's check_output function. The documentation is here. check_output returns, as a string, the output of a shell command. So you would have something like this:
import subprocess

# with shell=True, the whole pipeline goes in a single string
foo = subprocess.check_output("nso status | awk '{ print $1 }'", shell=True)
#Thanks bereal for shell=True
print foo
Of course, if you're only targeting Linux, you could use the much easier sh module. It allows you to import programs as if they were libraries.
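A rough sketch with sh (assuming nso is on the PATH, so sh resolves sh.nso to that binary):
import sh

# first column of every non-empty line, roughly what awk '{ print $1 }' gives
first_columns = [line.split()[0] for line in str(sh.nso('status')).splitlines() if line.strip()]
print('nagios-client' in first_columns)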
You can use subprocess to run this command and parse the output:
import subprocess

# a plain list without a shell would pass '|' and the awk part to nso as arguments,
# so run the whole pipeline through the shell
command = "nso status | awk '{ print $1 }'"
p1 = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
You don't have to run awk, since you're already in Python:
import subprocess

proc = subprocess.Popen(['nso', 'status'], stdout=subprocess.PIPE,
                        universal_newlines=True)  # text output instead of bytes
# get stdout as one newline-separated string, ignore stderr for now
out, _ = proc.communicate()
# parse the output: line.split()[0] is awk's $1; skip blank lines
items = [line.split()[0] for line in out.splitlines() if line.strip()]
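From there, the original question (does nagios-client exist?) is just a membership test:
if 'nagios-client' in items:
    print('nagios-client exists')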
I want to run a bash command from a Python script.
My bash command is:
grep -Po "(?<=<cite>).*?(?=</cite>)" /tmp/file1.txt | awk -F/ '{print $1}' | awk '!x[$0]++' > /tmp/file2.txt
what I tried is:
#!/usr/bin/python
import commands
commands.getoutput('grep ' + '-Po ' + '\"\(?<=<dev>\).*?\(?=</dev>\)\" ' + '/tmp/file.txt ' + '| ' + 'awk \'!x[$0]++\' ' + '> ' + '/tmp/file2.txt')
But I don't get any result.
Thank you
If you want to avoid splitting your arguments and worrying about pipes, you can use the shell=True option:
cmd = "grep -Po \"(?<=<dev>).*?(?=</dev>)\" /tmp/file.txt | awk -F/ '{print $1}' | awk '!x[$0]++' > file2.txt"
out = subprocess.check_output(cmd, shell=True)
This will run a subshell which understands all your directives, including "|" for piping and ">" for redirection. If you do not do this, these symbols, normally parsed by the shell, will just be passed as arguments to the grep program.
Otherwise, you have to create the pipes yourself. For example (untested code below):
grep_p = subprocess.Popen(["grep", "-Po", "(?<=<dev>).*?(?=</dev>)", "/tmp/file.txt"],
                          stdout=subprocess.PIPE)
awk_p = subprocess.Popen(["awk", "-F/", "{print $1}"],
                         stdin=grep_p.stdout, stdout=subprocess.PIPE)
file2_fh = open("file2.txt", "w")
awk_p_2 = subprocess.Popen(["awk", "!x[$0]++"], stdout=file2_fh, stdin=awk_p.stdout)
awk_p_2.communicate()
However, you're missing the point of Python if you are doing this. You should instead look into the re module (re.match, re.sub, re.search), though I'm not familiar enough with awk to translate your commands exactly.
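A sketch of that pure-Python route, reading and writing the same files as the original pipeline (the non-greedy regex replaces grep -P, split('/') replaces awk -F/, and the seen set replaces awk '!x[$0]++'):
import re

seen = set()
with open('/tmp/file1.txt') as src, open('/tmp/file2.txt', 'w') as dst:
    for line in src:
        # grep -Po "(?<=<cite>).*?(?=</cite>)": text between the tags, non-greedy
        for match in re.findall(r'(?<=<cite>).*?(?=</cite>)', line):
            first = match.split('/')[0]     # awk -F/ '{print $1}'
            if first not in seen:           # awk '!x[$0]++' keeps only the first occurrence
                seen.add(first)
                dst.write(first + '\n')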
The recommended way to run system commands in Python is to use the subprocess module.
import subprocess

# no shell is involved here, so the patterns and awk programs must not carry an extra layer of quotes
a = ['grep', '-Po', '(?<=<dev>).*?(?=</dev>)', '/tmp/file.txt']
b = ['awk', '-F/', '{print $1}']
c = ['awk', '!x[$0]++']

p1 = subprocess.Popen(a, stdout=subprocess.PIPE)
p2 = subprocess.Popen(b, stdin=p1.stdout, stdout=subprocess.PIPE)
p3 = subprocess.Popen(c, stdin=p2.stdout, stdout=subprocess.PIPE)
p1.stdout.close()
p2.stdout.close()
out, err = p3.communicate()
print out
The point of creating the pipes between the subprocesses yourself is security and easier debugging. It also makes it much clearer which process gets its input from, and sends its output to, which other process.
Let us write a simple function to easily deal with these messy pipes for us:
def subprocess_pipes(pipes, last_pipe_out=None):
    import subprocess
    from subprocess import PIPE
    last_p = None
    for cmd in pipes:
        out_pipe = PIPE if not (cmd == pipes[-1] and last_pipe_out) else open(last_pipe_out, "w")
        cmd = cmd if isinstance(cmd, list) else cmd.split(" ")
        in_pipe = last_p.stdout if last_p else None
        p = subprocess.Popen(cmd, stdout=out_pipe, stdin=in_pipe)
        last_p = p
    comm = last_p.communicate()
    return comm
Then we run,
subprocess_pipes(("ps ax", "grep python"), last_pipe_out = "test.out.2")
The result is a "test.out.2" file with the contents of piping "ps ax" into "grep python".
In your case,
a = ["grep", "-Po", "(?<=<cite>).*?(?=</cite>)", "/tmp/file1.txt"]
b = ["awk", "-F/", "{print $1}"]
c = ["awk", "!x[$0]++"]
subprocess_pipes((a, b, c), last_pipe_out = "/tmp/file2.txt")
The commands module is obsolete now.
If you don't actually need the output of your command you can use
import os
exit_status = os.system("your-command")
Otherwise you can use
import subprocess

out, err = subprocess.Popen("your | commands", stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, shell=True).communicate()
Note: your command redirects stdout to file2.txt, so I wouldn't expect to see anything in out. You will, however, still see error messages on stderr, which will go into err.
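If you would rather have the matches back in Python than in file2.txt, a variant without the redirection (a sketch, using the original pipeline):
import subprocess

cmd = ("grep -Po \"(?<=<cite>).*?(?=</cite>)\" /tmp/file1.txt"
       " | awk -F/ '{print $1}' | awk '!x[$0]++'")
out, err = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True).communicate()
print(out.decode('utf-8'))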
You can use:
import os
os.system(command)
I think what you are looking for is something like:
subprocess.check_output(same as Popen arguments, **kwargs). Use it the same way you would use a Popen command; it should give you the output of the program that's being called.
For more details here is a link: http://freefilesdl.com/how-to-call-a-shell-command-from-python/