Python - get TLD

I have a problem with a function which should remove the TLD from a domain. If the domain has a subdomain, it works correctly. For example:
Input: asdf.xyz.example.com
Output: asdf.xyz.example
The problem is when the domain has no subdomain: a dot is left in front of the domain.
Input: example.com
Output: .example
This is my code:
res = get_tld(domain, as_object=True, fail_silently=True, fix_protocol=True)
domain = '.'.join([res.subdomain, res.domain])
The get_tld function is from the tld library.
Could someone help me solve this problem?

With a very simple string manipulation, is this what you are looking for?
d1 = 'asdf.xyz.example.com'
output = '.'.join(d1.split('.')[:-1])
# output = 'asdf.xyz.example'
d2 = 'example.com'
output = '.'.join(d2.split('.')[:-1])
# output = 'example'

You can use filtering. It looks like get_tld works as intended, but the join is incorrect:
domain = '.'.join(filter(lambda x: len(x), [res.subdomain, res.domain]))
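For completeness, here is a minimal sketch combining the question's get_tld call with that filtering (assuming, as in the question, that the tld library's result object exposes subdomain and domain, and that fail_silently=True returns None on unparseable input):
```
from tld import get_tld

def strip_tld(domain):
    res = get_tld(domain, as_object=True, fail_silently=True, fix_protocol=True)
    if res is None:
        return domain  # could not parse: leave the input unchanged
    # join only the non-empty parts, so a missing subdomain adds no leading dot
    return '.'.join(p for p in (res.subdomain, res.domain) if p)

print(strip_tld('asdf.xyz.example.com'))  # asdf.xyz.example
print(strip_tld('example.com'))           # example
```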

Another simple version is this:
def remove_tld(url):
    *base, tld = url.split(".")
    return ".".join(base)

url = "asdf.xyz.example.com"
print(remove_tld(url))  # asdf.xyz.example

url = "example.com"
print(remove_tld(url))  # example
*base, tld = url.split(".") puts the TLD in tld and everything else in base. Then you just join that with ".".join(base).

Related

Python regex to find functions and params pairs in js files

I am writing a JavaScript crawler application.
The application needs to open JavaScript files and find some specific code in order to do some stuff with them.
I am using regular expressions to find the code of interest.
Consider the following JavaScript code:
let nlabel = rs.length ? st('string1', [st('string2', ctx = 'ctx2')], ctx = 'ctx1') : st('Found {0}', [st(this.param)]);
As you can see, the st function is called three times on the same line. The first two calls have an extra parameter named ctx but the third one doesn't have it.
What I need to do is to have 3 re matches as below:
Match 1
Group: function = "st('"
Group: string = "string1"
Group: ctx = "ctx1"
Match 2
Group: function = "st('"
Group: string = "string2"
Group: ctx = "ctx2"
Match 3
Group: function = "st('"
Group: string = "Found {0}"
Group: ctx = (None)
I am using regex101.com to test my patterns, and the pattern that gives the closest thing to what I am looking for is the following:
(?P<function>st\([\"'])(?P<string>.+?(?=[\"'](\s*,ctx\s*|\s*,\s*)))
You can see it in action here.
However, I have no idea how to make it return the ctx group the way I want it.
For your reference, I am using the following Python code:
import re

matches = []
code = "let nlabel = rs.length ? st('string1', [st('string2', ctx = 'ctx2')], ctx = 'ctx1') : st('Found {0}', [st(this.param)], ctx = 'ctxparam'"
pattern = "(?P<function>st\([\"'])(?P<string>.+?(?=[\"'](\s*,ctx\s*|\s*,\s*)))"
for m in re.compile(pattern).finditer(code):
    fnc = m.group('function')
    msg = m.group('string')
    ctx = m.group('ctx')  # raises IndexError: the pattern defines no 'ctx' group, which is exactly the problem
    idx = m.start()
    matches.append([idx, fnc, msg, ctx])
print(matches)
I have the feeling that re alone isn't capable to do exactly what I am looking for but any suggestion/solution which gets closer is more than welcome.
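As a sketch that gets partway there (an assumption on my part, not a full fix: a plain regex cannot track the bracket nesting, so the outer call's ctx, which only appears after the nested brackets, still comes back as None):
```
import re

code = "let nlabel = rs.length ? st('string1', [st('string2', ctx = 'ctx2')], ctx = 'ctx1') : st('Found {0}', [st(this.param)]);"

# make the ctx capture optional; [^()]*? keeps the search inside the
# current pair of parentheses, so a ctx after nested brackets is not matched
pattern = r"st\('(?P<string>[^']+)'(?:[^()]*?ctx\s*=\s*'(?P<ctx>[^']+)')?"

for m in re.finditer(pattern, code):
    print(m.group('string'), m.group('ctx'))
# string1 None   <- ctx1 sits after the nested brackets, so it is missed
# string2 ctx2
# Found {0} None
```
A proper JavaScript parser (for example, a tokenizer that tracks bracket depth) would be needed to attribute ctx1 to the outer call.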

Check if someone sends a link + code (discord.py rewrite)

I want to check if someone sends a link plus a code,
like this link:
```
https://pastebin.com/gD0KD6u4
```
The part after the last / is the code,
i.e. the code is: gD0KD6u4
Check this code; I believe it will be helpful.
link = 'https://pastebin.com/gD0KD6u4'
# reverse the string
reverse_link = link[::-1]
print(reverse_link)
# take everything before the first '/' of the reversed string
position = reverse_link.find('/')
code_link = reverse_link[:position]
print(code_link)
# reverse the code back
code_link = code_link[::-1]
print(code_link)
Another solution is to use rpartition():
url = 'https://pastebin.com/gD0KD6u4'
decompose = url.rpartition('/')
url = decompose[0]
code = decompose[-1]
print(url, code)
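A one-liner along the same lines, splitting once from the right (this assumes the code is always the final path segment):
```
url = 'https://pastebin.com/gD0KD6u4'
code = url.rsplit('/', 1)[-1]
print(code)  # gD0KD6u4
```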

How can I split a URL into three different variables

I want to split a URL into three strings.
Example:
https://www.google.com:443
http://amazon.com:467
I would like the output to be:
string 1: https or http
string 2: www.google.com or amazon.com
string 3: 443 or 467
The above output is based on the example provided. Basically I want to split the string into protocol, domain and port and assign to three different variables.
URLs are more complicated than one might think, which is why it's generally a good idea to use proven code to parse them and handle unexpected edge cases. Python has urllib.parse in the standard library, which you should use rather than trying to parse this yourself.
The parts you want are in the scheme, hostname, and port properties of the object returned from urlparse().
For example:
from urllib.parse import urlparse

def getParts(url_string):
    p = urlparse(url_string)
    return [p.scheme, p.hostname, p.port]

getParts('https://www.google.com:443')
# ['https', 'www.google.com', 443]
getParts('http://amazon.com:467')
# ['http', 'amazon.com', 467]

# surprising, but valid url:
getParts('https://en.wikipedia.org:443/wiki/Template:Welcome')
# ['https', 'en.wikipedia.org', 443]

# missing parts:
getParts('//www.google.com/example/home')
# ['', 'www.google.com', None]
Here you go:
url = 'https://www.google.com:443'
first = url.find(':')   # the scheme colon
last = url.rfind(':')   # the port colon; note this assumes the URL contains an explicit port
protocol = url[:first]
domain = url[first+3:last]
port = url[last+1:]
A 'primitive' method:
from collections import namedtuple

def split_url(url):
    split_1 = url.split('://')
    split_2 = split_1[1].split(':')
    protocol = split_1[0]
    domain = split_2[0]
    port = split_2[1]
    url_split = namedtuple('url_split', ['protocol', 'domain', 'port'])
    return url_split(protocol, domain, port)
So, for example:
s = 'https://www.google.com:443'
result = split_url(s)
Then we have:
result.protocol
>> 'https'
result.domain
>> 'www.google.com'
result.port
>> '443'

Get subdomain from URL using Python

For example, the address is:
Address = http://lol1.domain.com:8888/some/page
I want to save the subdomain into a variable so I could do like so:
print SubAddr
>> lol1
Package tldextract makes this task very easy, and then you can use urlparse as suggested if you need any further information:
>>> import tldextract
>>> tldextract.extract("http://lol1.domain.com:8888/some/page")
ExtractResult(subdomain='lol1', domain='domain', suffix='com')
>>> tldextract.extract("http://sub.lol1.domain.com:8888/some/page")
ExtractResult(subdomain='sub.lol1', domain='domain', suffix='com')
>>> urlparse.urlparse("http://sub.lol1.domain.com:8888/some/page")
ParseResult(scheme='http', netloc='sub.lol1.domain.com:8888', path='/some/page', params='', query='', fragment='')
Note that tldextract properly handles sub-domains.
urlparse.urlparse will split the URL into protocol, location, port, etc. You can then split the location by . to get the subdomain.
import urlparse
url = urlparse.urlparse(address)
subdomain = url.hostname.split('.')[0]
Modified version of the fantastic answer here: How to extract top-level domain name (TLD) from URL
You will need the list of effective TLDs from here
from __future__ import with_statement
from urlparse import urlparse

# load tlds, ignore comments and empty lines:
with open("effective_tld_names.dat.txt") as tldFile:
    tlds = [line.strip() for line in tldFile if line[0] not in "/\n"]

class DomainParts(object):
    def __init__(self, domain_parts, tld):
        self.domain = None
        self.subdomains = None
        self.tld = tld
        if domain_parts:
            self.domain = domain_parts[-1]
            if len(domain_parts) > 1:
                self.subdomains = domain_parts[:-1]

def get_domain_parts(url, tlds):
    urlElements = urlparse(url).hostname.split('.')
    # urlElements = ["abcde","co","uk"]
    for i in range(-len(urlElements), 0):
        lastIElements = urlElements[i:]
        # i=-3: ["abcde","co","uk"]
        # i=-2: ["co","uk"]
        # i=-1: ["uk"] etc.
        candidate = ".".join(lastIElements)  # abcde.co.uk, co.uk, uk
        wildcardCandidate = ".".join(["*"] + lastIElements[1:])  # *.co.uk, *.uk, *
        exceptionCandidate = "!" + candidate
        # match tlds:
        if exceptionCandidate in tlds:
            # exception rule: the candidate itself is registrable,
            # so the TLD is one label shorter
            return DomainParts(urlElements[:i+1], '.'.join(urlElements[i+1:]))
        if candidate in tlds or wildcardCandidate in tlds:
            return DomainParts(urlElements[:i], '.'.join(urlElements[i:]))
    raise ValueError("Domain not in global list of TLDs")

domain_parts = get_domain_parts("http://sub2.sub1.example.co.uk:80", tlds)
print "Domain:", domain_parts.domain
print "Subdomains:", domain_parts.subdomains or "None"
print "TLD:", domain_parts.tld
Gives you:
Domain: example
Subdomains: ['sub2', 'sub1']
TLD: co.uk
A very basic approach, without any sanity checking, could look like:
address = 'http://lol1.domain.com:8888/some/page'
host = address.partition('://')[2]
sub_addr = host.partition('.')[0]
print sub_addr
This of course assumes that when you say 'subdomain' you mean the first part of a host name, so in the following case, 'www' would be the subdomain:
http://www.google.com/
Is that what you mean?
What you are looking for is in:
http://docs.python.org/library/urlparse.html
For example:
from urlparse import urlparse  # Python 2
".".join(urlparse('http://www.my.cwi.nl:80/%7Eguido/Python.html').netloc.split(".")[:-2])
This will do the job for you (it returns "www.my").
For extracting the hostname, I'd use urlparse from urllib2:
>>> from urllib2 import urlparse
>>> a = "http://lol1.domain.com:8888/some/page"
>>> urlparse.urlparse(a).hostname
'lol1.domain.com'
As for how to extract the subdomain, you need to cover the case that the FQDN could be longer. How you do this would depend on your purposes. I might suggest stripping off the two rightmost components.
E.g.
>>> urlparse.urlparse(a).hostname.rpartition('.')[0].rpartition('.')[0]
'lol1'
We can use https://github.com/john-kurkowski/tldextract for this problem...
It's easy.
>>> ext = tldextract.extract('http://forums.bbc.co.uk')
>>> (ext.subdomain, ext.domain, ext.suffix)
('forums', 'bbc', 'co.uk')
tldextract separates the TLD from the registered domain and subdomains of a URL.
Installation
pip install tldextract
For the current question:
import tldextract
address = 'http://lol1.domain.com:8888/some/page'
domain = tldextract.extract(address).domain
print("Extracted domain name : ", domain)
The output:
Extracted domain name : domain
In addition, there are a number of examples closely related to the usage of tldextract.extract.
First of all, import tldextract, as this splits the URL into its constituents: subdomain, domain, and suffix.
import tldextract
Then declare a variable (say ext) that stores the result of the query. We also have to provide it with the URL, in parentheses and double quotes, as shown below:
ext = tldextract.extract("http://lol1.domain.com:8888/some/page")
If we simply evaluate the ext variable, the output will be:
ExtractResult(subdomain='lol1', domain='domain', suffix='com')
Then, if you want to use only the subdomain, domain, or suffix, use any of the lines below, respectively.
ext.subdomain
The result will be:
'lol1'
ext.domain
The result will be:
'domain'
ext.suffix
The result will be:
'com'
Also, if you want to store only the subdomain in a variable, use the code below:
Sub_Domain = ext.subdomain
Then print Sub_Domain:
Sub_Domain
The result will be:
'lol1'
Using Python 3 (3.9 to be specific), you can do the following:
from urllib.parse import urlparse
address = 'http://lol1.domain.com:8888/some/page'
url = urlparse(address)
url.hostname.split('.')[0]  # 'lol1'
import re

def extract_domain(domain):
    domain = re.sub(r'http(s)?://|(\:|/)(.*)|', '', domain)
    matches = re.findall(r"([a-z0-9][a-z0-9\-]{1,63}\.[a-z\.]{2,6})$", domain)
    if matches:
        return matches[0]
    else:
        return domain

def extract_subdomains(domain):
    subdomains = domain = re.sub(r'http(s)?://|(\:|/)(.*)|', '', domain)
    domain = extract_domain(subdomains)
    subdomains = re.sub(r'\.?' + domain, '', subdomains)
    return subdomains
Example to fetch subdomains:
print(extract_subdomains('http://lol1.domain.com:8888/some/page'))
print(extract_subdomains('kota-tangerang.kpu.go.id'))
Outputs:
lol1
kota-tangerang
Example to fetch domain
print(extract_domain('http://lol1.domain.com:8888/some/page'))
print(extract_domain('kota-tangerang.kpu.go.id'))
Outputs:
domain.com
kpu.go.id
Standardize all domains to start with www. unless they already have a subdomain.
from urllib.parse import urlparse

def has_subdomain(url):
    # count the labels of the host part, not of the whole URL
    return len(urlparse(url).netloc.split('.')) > 2

def standardize(url):
    domain = urlparse(url).netloc
    if not has_subdomain(url):
        domain = 'www.' + domain
    return urlparse(url).scheme + '://' + domain
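For example, with hypothetical URLs run through the sketch above (note that it drops any path, as the original snippet did):
```
print(standardize('https://example.com/page'))   # https://www.example.com
print(standardize('https://blog.example.com'))   # https://blog.example.com
```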

Canonical URL compare in Python?

Are there any tools to do a URL compare in Python?
For example, if I have http://google.com and google.com/ I'd like to know that they are likely to be the same site.
If I were to construct a rule manually, I might uppercase it, then strip off the http:// portion, and drop anything after the last alphanumeric character. But I can see failures of this, as I'm sure you can as well.
Is there a library that does this? How would you do it?
This is off the top of my head:
def canonical_url(u):
    u = u.lower()
    if u.startswith("http://"):
        u = u[7:]
    if u.startswith("www."):
        u = u[4:]
    if u.endswith("/"):
        u = u[:-1]
    return u

def same_urls(u1, u2):
    return canonical_url(u1) == canonical_url(u2)
Obviously, there's lots of room for more fiddling with this. Regexes might be better than startswith and endswith, but you get the idea.
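As a sketch of the regex variant hinted at above (same normalization, just expressed with re, plus https handling as an extra assumption):
```
import re

def canonical_url(u):
    u = u.lower()
    u = re.sub(r'^https?://', '', u)  # drop the scheme
    u = re.sub(r'^www\.', '', u)      # drop a leading www.
    return u.rstrip('/')              # drop trailing slashes

def same_urls(u1, u2):
    return canonical_url(u1) == canonical_url(u2)

print(same_urls('http://google.com', 'google.com/'))  # True
```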
There is quite a bit to creating a canonical URL, apparently.
The url-normalize library is the best that I have tested.
Depending on the source of your URLs, you may wish to clean them of other standard parameters such as UTM codes. w3lib.url.url_query_cleaner is useful for this.
Combining this with Ned Batchelder's answer could look something like:
Code:
from w3lib.url import url_query_cleaner
from url_normalize import url_normalize

urls = ['google.com',
        'google.com/',
        'http://google.com/',
        'http://google.com',
        'http://google.com?',
        'http://google.com/?',
        'http://google.com//',
        'http://google.com?utm_source=Google']

def canonical_url(u):
    u = url_normalize(u)
    u = url_query_cleaner(u, parameterlist=['utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content'], remove=True)
    if u.startswith("http://"):
        u = u[7:]
    if u.startswith("https://"):
        u = u[8:]
    if u.startswith("www."):
        u = u[4:]
    if u.endswith("/"):
        u = u[:-1]
    return u

list(map(canonical_url, urls))
Result:
['google.com',
'google.com',
'google.com',
'google.com',
'google.com',
'google.com',
'google.com',
'google.com']
You could look up the names using DNS and see if they point to the same IP. Some minor string processing may be required to remove confusing characters.
from socket import gethostbyname_ex

urls = ['http://google.com', 'google.com/', 'www.google.com/', 'news.google.com']
data = []
for originalName in urls:
    print 'url:', originalName
    name = originalName.strip()
    name = name.replace('http://', '')
    name = name.replace('http:', '')
    if name.find('/') > 0:
        name = name[:name.find('/')]
    if name.find('\\') > 0:
        name = name[:name.find('\\')]
    print 'dns lookup:', name
    if name:
        try:
            result = gethostbyname_ex(name)
        except Exception:
            continue  # unable to resolve
        for ip in result[2]:
            print 'ip:', ip
            data.append((ip, originalName))
print data
result:
url: http://google.com
dns lookup: google.com
ip: 66.102.11.104
url: google.com/
dns lookup: google.com
ip: 66.102.11.104
url: www.google.com/
dns lookup: www.google.com
ip: 66.102.11.104
url: news.google.com
dns lookup: news.google.com
ip: 66.102.11.104
[('66.102.11.104', 'http://google.com'), ('66.102.11.104', 'google.com/'), ('66.102.11.104', 'www.google.com/'), ('66.102.11.104', 'news.google.com')]
It's not 'fuzzy', it just finds the 'distance' between two strings:
http://pypi.python.org/pypi/python-Levenshtein/
I would remove all portions which are semantically meaningful to URL parsing (protocol, slashes, etc.), normalize to lowercase, then compute a Levenshtein distance, and from there decide how much difference is an acceptable threshold.
Just an idea.
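A minimal sketch of that idea, assuming the python-Levenshtein package is installed and picking an arbitrary edit-distance threshold of 2:
```
import Levenshtein

def strip_url(u):
    # remove protocol, www. and trailing slashes before comparing
    u = u.lower()
    for prefix in ('http://', 'https://', 'www.'):
        if u.startswith(prefix):
            u = u[len(prefix):]
    return u.rstrip('/')

def similar_urls(u1, u2, threshold=2):
    return Levenshtein.distance(strip_url(u1), strip_url(u2)) <= threshold

print(similar_urls('http://google.com', 'google.com/'))  # True
```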
