Using nslookup to find domain name and only the domain name - python

I currently have a text file with multiple IPs. I am attempting to pull only the domain name from the information that nslookup returns (code below):
import os

with open('test.txt', 'r') as f:
    for line in f:
        print os.system('nslookup' + " " + line)
This works insofar as it pulls all the information for the first IP. I can't get it past the first IP, and I'm also trying to trim the information received down to only the domain name of the IP. Is there any way to do that, or do I need to use a different module?
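A note on the snippet above: os.system() only returns the command's exit status, so the numbers printed are return codes, not the nslookup output. If you do want to shell out, something along these lines (a rough sketch, reusing the filename from the question) captures the output for every address in the file:

import subprocess

with open('test.txt', 'r') as f:
    for line in f:
        ip = line.strip()
        if not ip:
            continue
        # capture nslookup's output instead of its exit status
        output = subprocess.check_output(['nslookup', ip], universal_newlines=True)
        print(output)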

Like IgorN, I wouldn't make a system call to use nslookup; I would also use socket. However, the answer shared by IgorN provides the hostname. The requestor asked for the domain name. See below:
import socket

with open('test.txt', 'r') as f:
    for ip in f:
        # gethostbyaddr returns a tuple in the form of: ('server.example.com', [], ['127.0.0.1'])
        fqdn = socket.gethostbyaddr(ip.strip())
        domain = '.'.join(fqdn[0].split('.')[1:])
        print(domain)
Assuming that test.txt contains the following line, which resolves to a FQDN of server.example.com:
127.0.0.1
this will generate the following output:
example.com
which is what (I believe) the OP desires.

import socket

name = socket.gethostbyaddr('127.0.0.1')
print(name)     # to get the whole triple
print(name[0])  # to get just the hostname


Python replace line in text file

I am trying to manage a hosts file with a Python script. I am new to Python and I am having a hard time figuring out how to replace a line when I find a match. For example, if the address for a website gets changed in the hosts file, I want the script to find it and change it back. Thanks for your help.
import os
import time

# location of the hosts file to read and write to
hosts_path = r"C:\Windows\System32\drivers\etc\hosts"
# the address I want for the sites
redirect = "0.0.0.0"
# the websites that I will set the address for
website_list = ["portal.citidirect.com","www.bcinet.nc","secure.banque-tahiti.pf","www.bancatlan.hn","www.bancentro.com.ni","www.davivienda.com.sv","www.davivienda.cr","cmo.cibc.com","www.bi.com.gt","empresas.banistmo.com","online.belizebank.com","online.westernunion.com","archive.clickatell.com"]

# continuous loop
while True:
    with open(hosts_path, 'r+') as file:
        content = file.read()
        # for each of the websites in the list above, make sure it is in the hosts file with the correct address
        for website in website_list:
            site = redirect + " " + website
            # here is where I have an issue: if the website is in the hosts file but with the wrong address,
            # I want to overwrite that line, but instead the program adds it to the end of the file
            if website in content:
                if site in content:
                    pass
                else:
                    file.write(site)
            else:
                file.write("\n" + site)
    time.sleep(300)
    os.system('ipconfig /flushdns')
You need to read the file into a list, change the relevant entries in that list, and then write the list back to the file. What you were doing was just writing to the end of the file; you can't change a file in place like that. You need to record the changes in a list and then write the list out. I ended up rewriting a lot of the code. Here's the full script. I wasn't sure what os.system('ipconfig /flushdns') was accomplishing, so I removed it; you can easily add it back where you want it.
#!/usr/bin/env python3.6
import time

hosts_path = r"C:\Windows\System32\drivers\etc\hosts"
redirect = "0.0.0.0"
website_list = [
    "portal.citidirect.com",
    "www.bcinet.nc",
    "secure.banque-tahiti.pf",
    "www.bancatlan.hn",
    "www.bancentro.com.ni",
    "www.davivienda.com.sv",
    "www.davivienda.cr",
    "cmo.cibc.com",
    "www.bi.com.gt",
    "empresas.banistmo.com",
    "online.belizebank.com",
    "online.westernunion.com",
    "archive.clickatell.com"]

def substring_in_list(the_list, substring):
    for s in the_list:
        if substring in s:
            return True
    return False

def write_websites():
    with open(hosts_path, 'r') as file:
        content = file.readlines()
    for website in website_list:
        site = "{} {}\n".format(redirect, website)
        if not substring_in_list(content, website):
            content.append(site)
        else:
            for i, line in enumerate(content):
                if site in line:
                    pass
                elif website in line:
                    # overwrite the stale entry in place
                    content[i] = site
    with open(hosts_path, "w") as file:
        file.writelines(content)

while True:
    write_websites()
    time.sleep(300)
So, you're going to assign the same IP address to every site that doesn't appear in your websites list?
The following would replace what's inside your outermost while loop:
# Read in all the lines from the hosts file,
# splitting each into IP address, hostname and aliases (if any),
# and trimming leading and trailing whitespace from
# each of these components.
host_lines = [[component.strip() for component in line.split(None, 2)]
              for line in open(hosts_path).readlines()]

# Process each of the original lines.
for line in host_lines:
    # Is the site in our list?
    if line[1] in website_list:
        # Make sure the address is correct ...
        if line[0] != redirect:
            line[0] = redirect
        # We can remove this from the websites list.
        website_list.remove(line[1])

# Whatever sites are left in website_list don't appear
# in the hosts file. Add lines for these to host_lines.
host_lines.extend([[redirect, site] for site in website_list])

# Write the host_lines back out to the hosts file:
open(hosts_path, 'w').write("\n".join([" ".join(line) for line in host_lines]))
The rightmost join glues the components of each line back together into a single string. The join to the left of it glues all of these strings together with newline characters between them, and writes this entire string to the file.
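For example, with a couple of illustrative entries:

host_lines = [["0.0.0.0", "portal.citidirect.com"], ["0.0.0.0", "www.bcinet.nc"]]
print("\n".join([" ".join(line) for line in host_lines]))
# 0.0.0.0 portal.citidirect.com
# 0.0.0.0 www.bcinet.nc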
I have to say, this looks like a rather complicated and even dangerous way to make sure your hosts file stays up-to-date and accurate. Wouldn't it be better to just have a cron job scp a known-good hosts file from a trusted host every five minutes instead?
I ended up mixing some of the responses to create a new file that replaces the current hosts file, using functions as shown below. In addition to this code, I am using PyInstaller to turn it into an exe, and then I set that exe up to run as an auto-start service.
#!/usr/bin/env python3.6
import os
import shutil
import time
temp_file = r"c:\temp\Web\hosts"
temp_directory = r"c:\temp\Web"
hosts_path = r"C:\Windows\System32\drivers\etc\hosts"
websites = ('''# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
0.0.0.0 portal.citidirect.com
0.0.0.0 www.bcinet.nc
0.0.0.0 secure.banque-tahiti.pf
0.0.0.0 www.bancatlan.hn
0.0.0.0 www.bancentro.com.ni
0.0.0.0 www.davivienda.com.sv
0.0.0.0 www.davivienda.cr
0.0.0.0 cmo.cibc.com
0.0.0.0 www.bi.com.gt
0.0.0.0 empresas.banistmo.com
0.0.0.0 online.belizebank.com
0.0.0.0 online.westernunion.com
0.0.0.0 archive.clickatell.com''')
def write_websites():
    with open(temp_file, 'w+') as file:
        file.write(websites)

while True:
    if not os.path.exists(temp_directory):
        os.makedirs(temp_directory)
    try:
        os.remove(temp_file)
    except OSError:
        pass
    write_websites()
    try:
        os.remove(hosts_path)
    except OSError:
        pass
    try:
        shutil.move(temp_file, hosts_path)
    except OSError:
        pass
    os.system('ipconfig /flushdns')
    time.sleep(300)

find country from full domain name

I am writing a script to analyse the countries of a list of domain names (e.g. third.second.first). The data set is pretty old and many of the fully qualified domain names cannot be resolved via socket.gethostbyname(domain_str) in Python. Here are some of the alternatives I have come up with:
Retrieve the IP of second.first if the IP of third.second.first cannot be found, and then find the country of that IP. This does not seem like a good idea, since a DNS A-record can map a subdomain to an IP different from its primary domain (see the short check after this question).
Detect the country code from the domain name itself, e.g. if it ends in .jp, it is from Japan.
My questions are:
Is the first method acceptable?
Are there other methods to retrieve the country information for a domain name?
Thank you.
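As a quick illustration of the concern in the first alternative, you can compare how a subdomain and its parent domain resolve; the hostnames here are only examples and the addresses returned will vary:

import socket

# An A-record for a subdomain does not have to point at the same
# address as the parent domain, so falling back to second.first
# can silently change which country you end up looking up.
for host in ('www.google.com', 'google.com'):
    try:
        print(host, socket.gethostbyname(host))
    except socket.gaierror:
        print(host, 'could not resolve')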
I would recommend using the geolite2 module:
https://pypi.python.org/pypi/maxminddb-geolite2
So you could do something like this:
#!/usr/bin/python
import socket
from geolite2 import geolite2

def origin(ip, domain_str, result):
    print("{0} [{1}]: {2}".format(domain_str.strip(), ip, result))

def getip(domain_str):
    ip = socket.gethostbyname(domain_str.strip())
    reader = geolite2.reader()
    output = reader.get(ip)
    result = output['country']['iso_code']
    origin(ip, domain_str, result)

with open("/path/to/hostnames.txt", "r") as ins:
    for domain_str in ins:
        try:
            getip(domain_str)
        except socket.error as msg:
            print("{0} [could not resolve]".format(domain_str.strip()))
            if len(domain_str) > 2:
                subdomain = domain_str.split('.', 1)[1]
                try:
                    getip(subdomain)
                except:
                    continue

geolite2.close()
Output:
bing.com [204.79.197.200]: US
dd15-028.compuserve.com [could not resolve]
compuserve.com [149.174.98.149]: US
google.com [172.217.11.78]: US

Regex to find consecutive IP Addresses

I finally have to throw in the towel after working with this for quite some time today. I am trying to retrieve all the IP addresses from an output that looks like this:
My Address: 10.10.10.1
Explicit Route: 192.168.238.90 192.168.252.209 192.168.252.241 192.168.192.209
192.168.192.223
Record Route:
I need to pull all the IP addresses between 'Explicit Route' and 'Record Route'. I am using textfsm and I can't seem to get everything I need.
Use regex and string operations:
import re
s = '''My Address: 10.10.10.1
Explicit Route: 192.168.238.90 192.168.252.209 192.168.252.241 192.168.192.209
192.168.192.223
Record Route:'''
ips = re.findall(r'\d+\.\d+\.\d+\.\d+', s[s.find('Explicit Route'):s.find('Record Route')])
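With the sample text above, and assuming both markers appear exactly once, ips ends up holding the five addresses between them:

print(ips)
# ['192.168.238.90', '192.168.252.209', '192.168.252.241', '192.168.192.209', '192.168.192.223']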
import re

with open('file.txt', 'r') as file:
    f = file.read().splitlines()

for line in f:
    found = re.findall(r'(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})', line)
    for f in found:
        print(f)
Edit:
We open the txt file and read it line by line, then for each line we use a regular expression to find the IPs (each octet can have 1-3 digits, followed by a dot, with the octet pattern repeated four times).

Extract data from a field in a text file in Python

I am new to Python. What is the best way to extract data from a field in a text file?
My text file saves the information of a network. It looks like this:
Name: Machine_1 Status: On IP:10.0.0.1
Name: Machine_2 Status: On IP:10.0.0.2
Network_name: Private Router_name: router1 Router_ID=3568
Subnet: Tenant A
The file is not very structured. It cannot even be expressed as a CSV file due to the non-homogeneous nature of the rows, i.e. they do not all have the same column identifiers.
What I want to do is to be able to get the value of any field I want e.g. Router_ID.
Please help me find a solution to this.
Thanks.
You could use regular expressions to scan through your file. You'd have to define a regular expression for each field you want to extract. For example:
import re

data = """Name: Machine_1 Status: On IP:10.0.0.1
Name: Machine_2 Status: On IP:10.0.0.2
Network_name: Private Router_name: router1 Router_ID=3568
Subnet: Tenant A"""

for line in data.split('\n'):
    ip = re.match(r'.*IP:(\d+\.\d+\.\d+\.\d+)', line)
    rname = re.match(r'.*Router_name: (\w+)', line)
    if ip and ip.lastindex > 0:
        print(ip.group(1))
    if rname and rname.lastindex > 0:
        print(rname.group(1))
Output:
10.0.0.1
10.0.0.2
router1
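If you need to look up arbitrary fields rather than hard-coding one regex per field, a small helper along these lines may be closer to what you are after; the separator handling (':' or '=') is an assumption based on the sample data above:

import re

data = """Name: Machine_1 Status: On IP:10.0.0.1
Network_name: Private Router_name: router1 Router_ID=3568"""

def get_field(text, field):
    # A field looks like "Router_ID=3568" or "Router_name: router1":
    # the field name, then ':' or '=', then a value with no whitespace in it.
    match = re.search(r'{}\s*[:=]\s*(\S+)'.format(re.escape(field)), text)
    return match.group(1) if match else None

print(get_field(data, 'Router_ID'))    # 3568
print(get_field(data, 'Router_name'))  # router1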

Python requests fails to get webpages

I am using Python3 and the package requests to fetch HTML data.
I have tried running the line
r = requests.get('https://github.com/timeline.json')
which is the example from their tutorial, to no avail. However, when I run
request = requests.get('http://www.math.ksu.edu/events/grad_conf_2013/')
it works fine. I am getting errors such as:
AttributeError: 'MockRequest' object has no attribute 'unverifiable'
Error in sys.excepthook:
I am thinking the errors have something to do with the type of webpage I am attempting to get, since the page that works is just basic HTML that I wrote.
I am very new to requests and Python in general. I am also new to stackoverflow.
As a small example, here is a little tool I developed to fetch data from a website, in this case the visitor's IP address, and show it:
# Import the requests module
# TODO: Make sure to install it first
import requests

# Get the raw page source from the website
r = requests.get('http://whatismyipaddress.com')
text = r.text

# Get the approximate starting position of the IP address string
# (this offset depends on the page's current layout)
ip_text_pos = text.find('IP Information') + 62

# Now extract the IP address and store it
ip_address = text[ip_text_pos:ip_text_pos + 12]

# print 'Your IP address is: %s' % ip_address   # Python 2
# or, for Python 3:
print('Your IP address is: %s' % ip_address)
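Back to the original problem: the 'MockRequest' object has no attribute 'unverifiable' error is most likely caused by an old requests (or Python) installation rather than by the page being fetched, so upgrading requests is the usual fix. With a current requests install, a minimal check against a JSON endpoint looks like the sketch below (httpbin.org is just an illustrative endpoint, not part of the original question):

import requests

# Fetch a JSON endpoint and fail loudly on HTTP errors
r = requests.get('https://httpbin.org/ip', timeout=10)
r.raise_for_status()
print(r.status_code)
print(r.json())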
