How to validate a LinkedIn public profile URL with a regular expression in Python

I just want to validate a LinkedIn public profile URL. I tried the following:
a = "https://in.linkedin.com/afadasdf"
p = re.compile('(http(s?)://|[a-zA-Z0-9\-]+\.|[linkedin])[linkedin/~\-]+\.[a-zA-Z0-9/~\-_,&=\?\.;]+[^\.,\s<]')
p.match(a)
The above works fine, but when I give it the URL https://www.linkedin.com it doesn't match. Can anyone help me validate both forms?

It is the OR-ing between the http(s):// and www. parts that has given you the problem: the alternation matches one or the other, not both. You could change them to * (i.e. zero or more).
import re
a = "https://www.linkedin.com/afadasdf"
p = re.compile('((http(s?)://)*([a-zA-Z0-9\-])*\.|[linkedin])[linkedin/~\-]+\.[a-zA-Z0-9/~\-_,&=\?\.;]+[^\.,\s<]')
print p.match(a)
Although you might want to restrict it to www rather than any letters or digits? Note that square brackets denote a character class, so use a plain group instead:
p = re.compile('((http(s?)://)*(www)*\.|[linkedin])[linkedin/~\-]+\.[a-zA-Z0-9/~\-_,&=\?\.;]+[^\.,\s<]')
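For comparison, a single more readable pattern that makes both the scheme and the subdomain optional might look like this (a sketch only; the path character set is an assumption for illustration, not an official LinkedIn rule):
import re

pattern = re.compile(r'^(https?://)?([a-z0-9-]+\.)?linkedin\.com/[A-Za-z0-9/~_-]+$')

print(bool(pattern.match('https://in.linkedin.com/afadasdf')))   # True
print(bool(pattern.match('https://www.linkedin.com/afadasdf')))  # True
print(bool(pattern.match('https://example.com/afadasdf')))       # False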

This pattern may help:
^((http|https):\/\/)?(www\.linkedin\.com\/)[a-z]+\/[a-zA-Z0-9-]{5,30}$
I have tested it and it works fine for me.

Instead of matching the URL with a regex you could use the urllib module:
In [1]: import urllib.parse
In [2]: u = "https://in.linkedin.com/afadasdf"
In [3]: urllib.parse.urlparse(u)
Out[3]: ParseResult(scheme='https', netloc='in.linkedin.com', path='/afadasdf', params='', query='', fragment='')
Now you can check for the netloc and path property.
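A minimal sketch of such a check, assuming http(s), any linkedin.com host, and a non-empty path count as valid (these rules are illustrative):
import urllib.parse

def looks_like_linkedin_profile(url):
    parsed = urllib.parse.urlparse(url)
    # accept linkedin.com or any subdomain (www., in., ...), plus a profile path
    host_ok = parsed.netloc == 'linkedin.com' or parsed.netloc.endswith('.linkedin.com')
    return parsed.scheme in ('http', 'https') and host_ok and len(parsed.path) > 1

print(looks_like_linkedin_profile('https://in.linkedin.com/afadasdf'))  # True
print(looks_like_linkedin_profile('https://www.linkedin.com'))          # False: no profile path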

Related

How to do a general search

I'm trying to make a simple browser using PyQt5 (by following a tutorial). It's mostly working, except for one tiny problem:
def navigate_to_url(self):
    q = QUrl(self.urlbar.text())
    print(type(q))
    if q.scheme() == "":
        q.setScheme("http")
    self.tabs.currentWidget().setUrl(q)
Whenever I type something in the address bar it looks it up, but it adds 'http://'. If I search something like 'cats' I want it to work like a normal browser, i.e. bring me links that are associated with cats.
However, because 'http://' is added, it gives me a NAME_NOT_RESOLVED error.
Is there any way to fix this?
You could try checking whether the input is just a normal word, and if so, skip adding the http://. For example, I've got a text document with a lot of English words that you can use to check whether it's a normal word, like so:
if re.findall(r'\b' + re.escape(word1) + r'\b', contents, re.MULTILINE):
Assign word1 to your word and contents to the dictionary contents.
Here is an example of loading the dictionary:
import re

with open('dictionary.txt') as fh:
    contents = fh.read()
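Tying this back into navigate_to_url, a rough sketch (dictionary.txt and the DuckDuckGo URL are illustrative assumptions):
import re
from PyQt5.QtCore import QUrl

with open('dictionary.txt') as fh:
    contents = fh.read()

def navigate_to_url(self):
    text = self.urlbar.text()
    if re.findall(r'\b' + re.escape(text) + r'\b', contents, re.MULTILINE):
        # plain English word: hand it to a search engine instead
        q = QUrl('https://duckduckgo.com/?q=' + text)
    else:
        q = QUrl(text)
        if q.scheme() == "":
            q.setScheme("http")
    self.tabs.currentWidget().setUrl(q)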
Consider setting the URL to the search-engine query explicitly when the input does (or doesn't) match some criteria.
At its most basic, you could use urllib.parse.urlparse for this, though it may not be an exact fit for all addresses, as it expects a scheme prefix, which most people don't bother typing, letting the browser add http(s) implicitly:
>>> import urllib.parse
>>> urllib.parse.urlparse("https://example.com") # full example
ParseResult(scheme='https', netloc='example.com', path='', params='', query='', fragment='')
>>> urllib.parse.urlparse("cats") # search works
ParseResult(scheme='', netloc='', path='cats', params='', query='', fragment='')
>>> urllib.parse.urlparse("example.com") # fails for missing scheme
ParseResult(scheme='', netloc='', path='example.com', params='', query='', fragment='')
A quick test for an intended URL without a scheme (to hint that an address is a netloc) would be whether the parsed path contains a '.'.
Alternatively, you could require some prefix before searches (perhaps a space, or a keyword like 'd ' or 's ').
You may also need to URL-encode your string (exchanging spaces for '+', '?' for '%3F', etc.), which can also be done with urllib.parse's quote_plus:
>>> urllib.parse.quote_plus("What does a url-encoded cat query look like?")
'What+does+a+url-encoded+cat+query+look+like%3F'
Duck Duck Go Search Parameters
All together:
import urllib.parse

url_search_template = "https://duckduckgo.com/?q={}"
keyword_search = "d "

text = self.urlbar.text()

def probably_a_search(s):
    # check for prefix first to prevent matches against a search like 3.1415
    if s.startswith(keyword_search):
        return True, s[len(keyword_search):]  # slice off the search prefix
    parsed_url = urllib.parse.urlparse(s)
    if parsed_url.scheme or parsed_url.netloc:
        return False, s
    if "." in parsed_url.path:
        return False, s
    return True, s

is_search, text = probably_a_search(text)
if is_search:
    text = url_search_template.format(urllib.parse.quote_plus(text.strip()))
q = QUrl(text)
To get a more accurate test against the TLD (rather than the simple presence of .), a 3rd-party library like https://pypi.org/project/tld/ may work better for you
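A hedged sketch of that approach, using the third-party tld package (pip install tld); the behavior noted in the comments is an assumption based on its documented API:
from tld import get_tld

# With fail_silently=True, get_tld returns None when no recognized suffix is
# found; fix_protocol=True lets it accept scheme-less input.
print(get_tld('http://www.google.co.uk'))                      # 'co.uk'
print(get_tld('cats', fix_protocol=True, fail_silently=True))  # likely None -> treat as a search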

Extracting only the username and password of a URL with a regex

Can someone please show me how to extract only the username and the password from a link with a regex?
link = 'http://test.ddns.net:8000/get.php?username=9OsSVedOky&password=Oz2Vmx9GuW&type=list&output=tr'
url = 'http://test.ddns.net:8000/get.php?username=[a-zA-Z]|[0-9]|[$-_#.&+]|&password=[a-zA-Z]|[0-9]|[$-_#.&+]|&type=list&output=tr'
urls = re.findall(url, link)
Sorry if I'm not using the right terms, but I'm new to coding. Thank you.
Parsing a url with a regex is in general a bad idea, and especially bad when you have such a poor grasp of the syntax. If you must do it (and the only good reason is because you have been told to), then
>>> import re
>>> rx=re.compile(r"username=(?P<username>[^&]+).*password=(?P<password>[^&]+)")
>>> m = rx.search(link)
>>> m.groupdict()['username']
'9OsSVedOky'
>>> m.groupdict()['password']
'Oz2Vmx9GuW'
But I endorse Rawing's suggestion. It's much better:
>>> import urllib.parse
>>> qsp=urllib.parse.parse_qs(link.partition('?')[2])
>>> qsp['username']
['9OsSVedOky']
>>> qsp['password']
['Oz2Vmx9GuW']
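The same idea reads even more cleanly if urlparse isolates the query string first (a Python 3 sketch of the equivalent):
from urllib.parse import urlparse, parse_qs

link = 'http://test.ddns.net:8000/get.php?username=9OsSVedOky&password=Oz2Vmx9GuW&type=list&output=tr'
qsp = parse_qs(urlparse(link).query)
print(qsp['username'][0])  # 9OsSVedOky
print(qsp['password'][0])  # Oz2Vmx9GuW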

Handling web links in Python

I have some links stored in a file which looks like this:
http://r14---sn-p5qlsnss.googlevideo.com/videoplayback?itag=22&id=o-AOtM1kWozUiJKP2ENWH989ZIfJaZNPVvXTrBkXx40lG5&key=yt6&ip=159.253.144.86&lmt=1480060612064057&dur=1047.870&mv=m&source=youtube&ms=au&ei=DtN8WLfwFsKb1gKXho6YDw&expire=1484597102&mn=sn-p5qlsnss&mm=31&ipbits=0&nh=IgpwcjAzLmlhZDA3KgkxMjcuMC4wLjE&initcwndbps=4717500&mt=1484575249&pl=24&signature=1ECAB2B56C30CBF760721A1A26A7E80963DB36B8.6336B2C9C41DB53C8FA1D2A037793275F57C4825&ratebypass=yes&mime=video%2Fmp4&upn=tUcEt34Qe6c&sparams=dur%2Cei%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cnh%2Cpl%2Cratebypass%2Csource%2Cupn%2Cexpire&title=600ft+UFO+Crash+Site+Discovered+On+Mars%21+11%2F23%2F16
At the end of the link we have the video's title. I want to read this link from a file and get the video's title in a proper format (with those '+' and '%' signs properly resolved). How do I do that?
I cannot use raw cgi as suggested here since the link is read from a file and not submitted by a form. Any idea?
There's the super-convenient urllib.parse.parse_qs in Python 3, but if you're using Python 2, you might have to dig the title string out first.
import urllib
url = 'http://r14---sn-p5qlsnss.googlevideo.com/videoplayback?itag=22&id=o-AOtM1kWozUiJKP2ENWH989ZIfJaZNPVvXTrBkXx40lG5&key=yt6&ip=159.253.144.86&lmt=1480060612064057&dur=1047.870&mv=m&source=youtube&ms=au&ei=DtN8WLfwFsKb1gKXho6YDw&expire=1484597102&mn=sn-p5qlsnss&mm=31&ipbits=0&nh=IgpwcjAzLmlhZDA3KgkxMjcuMC4wLjE&initcwndbps=4717500&mt=1484575249&pl=24&signature=1ECAB2B56C30CBF760721A1A26A7E80963DB36B8.6336B2C9C41DB53C8FA1D2A037793275F57C4825&ratebypass=yes&mime=video%2Fmp4&upn=tUcEt34Qe6c&sparams=dur%2Cei%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cnh%2Cpl%2Cratebypass%2Csource%2Cupn%2Cexpire&title=600ft+UFO+Crash+Site+Discovered+On+Mars%21+11%2F23%2F16'
title = url[url.rfind('&title=') + 7:]
print urllib.unquote_plus(title)
Note: thanks to bereal for pointing out that parse_qs is also available in Python 2 (in the urlparse module), so just:
import urlparse
print urlparse.parse_qs(url)['title'][0]
# 600ft UFO Crash Site Discovered On Mars! 11/23/16
You could use urllib.parse.parse_qs and give it the string:
In [17]: urllib.parse.parse_qs(s)
Out[17]:
{'dur': ['1047.870'],
'ei': ['DtN8WLfwFsKb1gKXho6YDw'],
'expire': ['1484597102'],
'http://r14---sn-p5qlsnss.googlevideo.com/videoplayback?itag': ['22'],
[.. and so on ..]
'source': ['youtube'],
'sparams': ['dur,ei,id,initcwndbps,ip,ipbits,itag,lmt,mime,mm,mn,ms,mv,nh,pl,ratebypass,source,upn,expire'],
'title': ['600ft UFO Crash Site Discovered On Mars! 11/23/16'],
'upn': ['tUcEt34Qe6c']}
In [18]: urllib.parse.parse_qs(s)["title"][0]
Out[18]: '600ft UFO Crash Site Discovered On Mars! 11/23/16'
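Note that the first key above swallowed the scheme and host. Feeding parse_qs only the query component avoids that:
In [19]: urllib.parse.parse_qs(urllib.parse.urlparse(s).query)["title"][0]
Out[19]: '600ft UFO Crash Site Discovered On Mars! 11/23/16'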
Purl can fit your needs:
import purl
u = purl.URL('http://r14---sn-p5qlsnss.googlevideo.com/videoplayback?itag=22&id=o-AOtM1kWozUiJKP2ENWH989ZIfJaZNPVvXTrBkXx40lG5&key=yt6&ip=159.253.144.86&lmt=1480060612064057&dur=1047.870&mv=m&source=youtube&ms=au&ei=DtN8WLfwFsKb1gKXho6YDw&expire=1484597102&mn=sn-p5qlsnss&mm=31&ipbits=0&nh=IgpwcjAzLmlhZDA3KgkxMjcuMC4wLjE&initcwndbps=4717500&mt=1484575249&pl=24&signature=1ECAB2B56C30CBF760721A1A26A7E80963DB36B8.6336B2C9C41DB53C8FA1D2A037793275F57C4825&ratebypass=yes&mime=video%2Fmp4&upn=tUcEt34Qe6c&sparams=dur%2Cei%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cnh%2Cpl%2Cratebypass%2Csource%2Cupn%2Cexpire&title=600ft+UFO+Crash+Site+Discovered+On+Mars%21+11%2F23%2F16')
print(u.query_param('title'))
use urlparse.parse_qs:
try:
    import urlparse  # Python 2
except ImportError:
    from urllib import parse as urlparse  # Python 3
rv = urlparse.parse_qs(link)
title = rv['title'][0]
import urllib
a = "http://r14---sn-p5qlsnss.googlevideo.com/videoplayback?itag=22&id=o-AOtM1kWozUiJKP2ENWH989ZIfJaZNPVvXTrBkXx40lG5&key=yt6&ip=159.253.144.86&lmt=1480060612064057&dur=1047.870&mv=m&source=youtube&ms=au&ei=DtN8WLfwFsKb1gKXho6YDw&expire=1484597102&mn=sn-p5qlsnss&mm=31&ipbits=0&nh=IgpwcjAzLmlhZDA3KgkxMjcuMC4wLjE&initcwndbps=4717500&mt=1484575249&pl=24&signature=1ECAB2B56C30CBF760721A1A26A7E80963DB36B8.6336B2C9C41DB53C8FA1D2A037793275F57C4825&ratebypass=yes&mime=video%2Fmp4&upn=tUcEt34Qe6c&sparams=dur%2Cei%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cnh%2Cpl%2Cratebypass%2Csource%2Cupn%2Cexpire&title=600ft+UFO+Crash+Site+Discovered+On+Mars%21+11%2F23%2F16"
b = a.split('=')[-1]
print urllib.unquote_plus(b)

Parsing hostname and port from string or url

I can be given a string in any of these formats:
url: e.g. http://www.acme.com:456
string: e.g. www.acme.com:456, www.acme.com 456, or www.acme.com
I would like to extract the host and if present a port. If the port value is not present I would like it to default to 80.
I have tried urlparse, which works fine for the url, but not for the other format. When I use urlparse on hostname:port for example, it puts the hostname in the scheme rather than netloc.
I would be happy with a solution that uses urlparse and a regex, or a single regex that could handle both formats.
You can use urlparse to get hostname from URL string:
from urlparse import urlparse
print urlparse("http://www.website.com/abc/xyz.html").hostname # prints www.website.com
>>> from urlparse import urlparse
>>> aaa = urlparse('http://www.acme.com:456')
>>> aaa.hostname
'www.acme.com'
>>> aaa.port
456
>>>
I'm not that familiar with urlparse, but using regex you'd do something like:
p = '(?:http.*://)?(?P<host>[^:/ ]+).?(?P<port>[0-9]*).*'
m = re.search(p,'http://www.abc.com:123/test')
m.group('host') # 'www.abc.com'
m.group('port') # '123'
Or, without port:
m = re.search(p,'http://www.abc.com/test')
m.group('host') # 'www.abc.com'
m.group('port') # '' i.e. you'll have to treat this as '80'
EDIT: fixed regex to also match 'www.abc.com 123'
The reason it fails for:
www.acme.com 456
is that it is not a valid URI. Why don't you just:
1. Replace the space with a ':'
2. Parse the resulting string with the standard urlparse method
Try to make use of default functionality as much as possible, especially when it comes to parsing well-known formats like URIs. A sketch of this recipe follows.
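A minimal sketch, assuming Python 3 and defaulting the port to 80 when absent (host_and_port is an illustrative helper, not a standard function):
from urllib.parse import urlparse

def host_and_port(s, default_port=80):
    s = s.strip().replace(' ', ':')  # 'www.acme.com 456' -> 'www.acme.com:456'
    if '//' not in s:
        s = '//' + s                 # make urlparse treat the input as a netloc
    parsed = urlparse(s)
    return parsed.hostname, parsed.port or default_port

print(host_and_port('http://www.acme.com:456'))  # ('www.acme.com', 456)
print(host_and_port('www.acme.com 456'))         # ('www.acme.com', 456)
print(host_and_port('www.acme.com'))             # ('www.acme.com', 80)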
Method using urllib -
from urllib.parse import urlparse
url = 'https://stackoverflow.com/questions'
print(urlparse(url))
Output -
ParseResult(scheme='https', netloc='stackoverflow.com',
path='/questions', params='', query='', fragment='')
Reference - https://www.tutorialspoint.com/urllib-parse-parse-urls-into-components-in-python

urllib2 file name

If I open a file using urllib2, like so:
remotefile = urllib2.urlopen('http://example.com/somefile.zip')
Is there an easy way to get the file name other than parsing the original URL?
EDIT: changed openfile to urlopen... not sure how that happened.
EDIT2: I ended up using:
filename = url.split('/')[-1].split('#')[0].split('?')[0]
Unless I'm mistaken, this should strip out all potential queries as well.
Did you mean urllib2.urlopen?
You could potentially lift the intended filename if the server was sending a Content-Disposition header by checking remotefile.info()['Content-Disposition'], but as it is I think you'll just have to parse the url.
You could use urlparse.urlsplit, but if you have any URLs like the second example, you'll end up having to pull the file name out yourself anyway:
>>> urlparse.urlsplit('http://example.com/somefile.zip')
('http', 'example.com', '/somefile.zip', '', '')
>>> urlparse.urlsplit('http://example.com/somedir/somefile.zip')
('http', 'example.com', '/somedir/somefile.zip', '', '')
Might as well just do this:
>>> 'http://example.com/somefile.zip'.split('/')[-1]
'somefile.zip'
>>> 'http://example.com/somedir/somefile.zip'.split('/')[-1]
'somefile.zip'
If you only want the file name itself, assuming that there are no query variables at the end, like http://example.com/somedir/somefile.zip?foo=bar, then you can use os.path.basename for this:
[user@host]$ python
Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04)
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.path.basename("http://example.com/somefile.zip")
'somefile.zip'
>>> os.path.basename("http://example.com/somedir/somefile.zip")
'somefile.zip'
>>> os.path.basename("http://example.com/somedir/somefile.zip?foo=bar")
'somefile.zip?foo=bar'
Some other posters mentioned using urlparse, which will work, but you'd still need to strip the leading directory from the file name. If you use os.path.basename() then you don't have to worry about that, since it returns only the final part of the URL or file path.
I think that "the file name" isn't a very well defined concept when it comes to http transfers. The server might (but is not required to) provide one as "content-disposition" header, you can try to get that with remotefile.headers['Content-Disposition']. If this fails, you probably have to parse the URI yourself.
Just saw this. I normally do:
filename = url.split("?")[0].split("/")[-1]
Using urlsplit is the safest option:
url = 'http://example.com/somefile.zip'
urlparse.urlsplit(url).path.split('/')[-1]
Do you mean urllib2.urlopen? There is no function called openfile in the urllib2 module.
Anyway, use the urllib2.urlparse functions:
>>> from urllib2 import urlparse
>>> print urlparse.urlsplit('http://example.com/somefile.zip')
('http', 'example.com', '/somefile.zip', '', '')
Voila.
You could also combine the two best-rated answers: use urllib2.urlparse.urlsplit() to get the path part of the URL, then os.path.basename for the actual file name.
Full code would be:
>>> remotefile = urllib2.urlopen(url)
>>> try:
...     filename = remotefile.info()['Content-Disposition']
... except KeyError:
...     filename = os.path.basename(urllib2.urlparse.urlsplit(url).path)
The os.path.basename function works not only for file paths, but also for URLs, so you don't have to parse the URL manually. Also, it's important to note that you should use result.url instead of the original url in order to follow redirect responses:
import os
import urllib2
result = urllib2.urlopen(url)
real_url = urllib2.urlparse.urlparse(result.url)
filename = os.path.basename(real_url.path)
I guess it depends what you mean by parsing. There is no way to get the filename without parsing the URL, i.e. the remote server doesn't give you a filename. However, you don't have to do much yourself, there's the urlparse module:
In [9]: urlparse.urlparse('http://example.com/somefile.zip')
Out[9]: ('http', 'example.com', '/somefile.zip', '', '', '')
Not that I know of. But you can parse it easily enough like this:
url = 'http://example.com/somefile.zip'
print url.split('/')[-1]
Using requests, though you can do it just as easily with urllib(2):
import requests
from urllib import unquote
from urlparse import urlparse

filename = False  # assumed initialization; the original snippet implies it
sample = requests.get(url)
if sample.status_code == 200:
    # has_key does not work here, and this helps avoid problems with names
    if filename == False:
        if 'content-disposition' in sample.headers.keys():
            filename = sample.headers['content-disposition'].split('filename=')[-1].replace('"', '').replace(';', '')
        else:
            filename = urlparse(sample.url).query.split('/')[-1].split('=')[-1].split('&')[-1]
            if not filename:
                if url.split('/')[-1] != '':
                    filename = sample.url.split('/')[-1].split('=')[-1].split('&')[-1]
    filename = unquote(filename)
You can probably use a simple regular expression here. Something like:
In [26]: import re
In [27]: pat = re.compile('.+[\/\?#=]([\w-]+\.[\w-]+(?:\.[\w-]+)?$)')
In [28]: test_set
Out[28]:
['http://www.google.com/a341.tar.gz',
'http://www.google.com/a341.gz',
'http://www.google.com/asdasd/aadssd.gz',
'http://www.google.com/asdasd?aadssd.gz',
'http://www.google.com/asdasd#blah.gz',
'http://www.google.com/asdasd?filename=xxxbl.gz']
In [30]: for url in test_set:
....: match = pat.match(url)
....: if match and match.groups():
....: print(match.groups()[0])
....:
a341.tar.gz
a341.gz
aadssd.gz
aadssd.gz
blah.gz
xxxbl.gz
Using PurePosixPath, which is not operating-system-dependent and handles URLs gracefully, is the Pythonic solution:
>>> from pathlib import PurePosixPath
>>> path = PurePosixPath('http://example.com/somefile.zip')
>>> path.name
'somefile.zip'
>>> path = PurePosixPath('http://example.com/nested/somefile.zip')
>>> path.name
'somefile.zip'
Notice how there is no network traffic here or anything (i.e. those urls don't go anywhere) - just using standard parsing rules.
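One caveat: a query string would end up inside .name, so you might first strip it with urlsplit (a small sketch):
from pathlib import PurePosixPath
from urllib.parse import urlsplit

url = 'http://example.com/somedir/somefile.zip?foo=bar'
print(PurePosixPath(urlsplit(url).path).name)  # 'somefile.zip'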
import os,urllib2
resp = urllib2.urlopen('http://www.example.com/index.html')
my_url = resp.geturl()
os.path.split(my_url)[1]
# 'index.html'
This is not openfile, but maybe still helps :)
