I'm having a bit of trouble trying to save the links from a website into a list without repeating URLs from the same domain.
Example:
www.python.org/download and www.python.org/about
should only save the first one (www.python.org/download) and not repeat it later
This is what I've got so far:
from bs4 import BeautifulSoup
import requests
from urllib.parse import urlparse
url = "https://docs.python.org/3/library/urllib.request.html#module-urllib.request"
result = requests.get(url)
doc = BeautifulSoup(result.text, "html.parser")
atag = doc.find_all('a', href=True)
links = []
#below should be some kind of for loop
As a one-liner (the := walrus operator requires Python 3.8+):
links = {nl for a in doc.find_all('a', href=True) if (nl := urlparse(a["href"]).netloc) != ""}
Explained:
links = set()  # define empty set
for a in doc.find_all('a', href=True):  # loop over every <a> element
    nl = urlparse(a["href"]).netloc  # get netloc from url
    if nl:
        links.add(nl)  # add to set if it exists
Output:
{'www.w3.org', 'datatracker.ietf.org', 'www.python.org', 'requests.readthedocs.io', 'github.com', 'www.sphinx-doc.org'}
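If you want to keep the first full URL seen for each domain (as in your example), rather than only the netloc, here is a minimal sketch along the same lines, reusing the url and doc objects from above:
from urllib.parse import urljoin, urlparse

first_per_domain = {}  # netloc -> first full URL seen for that domain
for a in doc.find_all('a', href=True):
    full = urljoin(url, a["href"])     # resolve relative hrefs against the page URL
    netloc = urlparse(full).netloc
    if netloc and netloc not in first_per_domain:
        first_per_domain[netloc] = full

links = list(first_per_domain.values())  # one URL per domain, first occurrence kept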
I have the following piece of code which extracts all links from a page and puts them in a list (links=[]), which is then passed to the function filter_links().
I wish to filter out any links that are not from the same domain as the starting link, aka the first link in the list. This is what I have:
import requests
from bs4 import BeautifulSoup
import re
start_url = "http://www.enzymebiosystems.org/"
r = requests.get(start_url)
html_content = r.text
soup = BeautifulSoup(html_content, features='lxml')
links = []
for tag in soup.find_all('a', href=True):
    links.append(tag['href'])
def filter_links(links):
    filtered_links = []
    for link in links:
        if link.startswith(links[0]):
            filtered_links.append(link)
        return filtered_links

print(filter_links(links))
I have used the built-in startswith function, but it's filtering out everything except the starting url.
Eventually I want to pass several different start urls through this program, so I need a generic way of filtering urls that are within the same domain as the starting url. I think I could use regex, but this function should work too?
Try this:
import requests
from bs4 import BeautifulSoup
import re
import tldextract

start_url = "http://www.enzymebiosystems.org/"
r = requests.get(start_url)
html_content = r.text
soup = BeautifulSoup(html_content, features='lxml')

links = []
for tag in soup.find_all('a', href=True):
    links.append(tag['href'])

def filter_links(links):
    ext = tldextract.extract(start_url)
    domain = ext.domain
    filtered_links = []
    for link in links:
        if domain in link:
            filtered_links.append(link)
    return filtered_links

print(filter_links(links))
Note:
You need to move the return statement out of the for loop. As written, the function returns after iterating over just one element, so only the first matching item ends up in the list.
Use the tldextract module to extract the domain name from the URLs more reliably. Whether you also want to explicitly check that the links start with links[0] is up to you.
Output:
['http://enzymebiosystems.org', 'http://enzymebiosystems.org/', 'http://enzymebiosystems.org/leadership/about/', 'http://enzymebiosystems.org/leadership/directors-advisors/', 'http://enzymebiosystems.org/leadership/mission-values/', 'http://enzymebiosystems.org/leadership/marketing-strategy/', 'http://enzymebiosystems.org/leadership/business-strategy/', 'http://enzymebiosystems.org/technology/research/', 'http://enzymebiosystems.org/technology/manufacturer/', 'http://enzymebiosystems.org/recent-developments/', 'http://enzymebiosystems.org/investors-media/presentations-downloads/', 'http://enzymebiosystems.org/investors-media/press-releases/', 'http://enzymebiosystems.org/contact-us/', 'http://enzymebiosystems.org/leadership/about', 'http://enzymebiosystems.org/leadership/about', 'http://enzymebiosystems.org/leadership/marketing-strategy', 'http://enzymebiosystems.org/leadership/marketing-strategy', 'http://enzymebiosystems.org/contact-us', 'http://enzymebiosystems.org/contact-us', 'http://enzymebiosystems.org/view-sec-filings/', 'http://enzymebiosystems.org/view-sec-filings/', 'http://enzymebiosystems.org/unregistered-sale-of-equity-securities/', 'http://enzymebiosystems.org/unregistered-sale-of-equity-securities/', 'http://enzymebiosystems.org/enzymebiosystems-files-sec-form-8-k-change-in-directors-or-principal-officers/', 'http://enzymebiosystems.org/enzymebiosystems-files-sec-form-8-k-change-in-directors-or-principal-officers/', 'http://enzymebiosystems.org/form-10-q-for-enzymebiosystems/', 'http://enzymebiosystems.org/form-10-q-for-enzymebiosystems/', 'http://enzymebiosystems.org/technology/research/', 'http://enzymebiosystems.org/investors-media/presentations-downloads/', 'http://enzymebiosystems.org', 'http://enzymebiosystems.org/leadership/about/', 'http://enzymebiosystems.org/leadership/directors-advisors/', 'http://enzymebiosystems.org/leadership/mission-values/', 'http://enzymebiosystems.org/leadership/marketing-strategy/', 'http://enzymebiosystems.org/leadership/business-strategy/', 'http://enzymebiosystems.org/technology/research/', 'http://enzymebiosystems.org/technology/manufacturer/', 'http://enzymebiosystems.org/investors-media/news/', 'http://enzymebiosystems.org/investors-media/investor-relations/', 'http://enzymebiosystems.org/investors-media/press-releases/', 'http://enzymebiosystems.org/investors-media/stock-information/', 'http://enzymebiosystems.org/investors-media/presentations-downloads/', 'http://enzymebiosystems.org/contact-us']
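As a side note, the check domain in link is loose (it would also match unrelated URLs that merely contain the word); a stricter sketch compares registered domains with tldextract. The helper name here is mine, not from the original answer:
import tldextract
from urllib.parse import urljoin

def same_registered_domain(link, base_url=start_url):
    # resolve relative links against the start URL, then compare registered domains
    link_domain = tldextract.extract(urljoin(base_url, link)).registered_domain
    base_domain = tldextract.extract(base_url).registered_domain
    return link_domain == base_domain

filtered_links = [link for link in links if same_registered_domain(link)]
print(filtered_links)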
Okay, so you made an indentation error in filter_links(links). The function should be like this:
def filter_links(links):
    filtered_links = []
    for link in links:
        if link.startswith(links[0]):
            filtered_links.append(link)
    return filtered_links
Notice that in your code you kept the return statement inside the for loop, so the loop runs only once and then the list is returned.
Hope this helps :)
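Since you eventually want to pass several different start URLs through this program, one alternative (a sketch, not from the original answer) is to compare hosts with urlparse instead of relying on startswith, which breaks for relative links or a different scheme:
from urllib.parse import urlparse, urljoin

def filter_links(links, start_url):
    # keep only links whose host matches the start URL's host
    base_netloc = urlparse(start_url).netloc
    filtered_links = []
    for link in links:
        absolute = urljoin(start_url, link)          # resolve relative hrefs first
        if urlparse(absolute).netloc == base_netloc:
            filtered_links.append(absolute)
    return filtered_links

print(filter_links(links, start_url))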
Possible Solution
What if you kept all the links which 'contain' the domain?
For example:
import pandas as pd

links = []
for tag in soup.find_all('a', href=True):
    links.append(tag['href'])

all_links = pd.DataFrame(links, columns=["Links"])
enzyme_df = all_links[all_links.Links.str.contains("enzymebiosystems")]
# results in a dataframe with links containing "enzymebiosystems"
If you want to search multiple domains, see this answer
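For multiple domains, one option (a sketch; the second domain name here is purely illustrative) is to pass a regex alternation to str.contains:
# keep rows whose link contains any of the listed domain strings
domains = ["enzymebiosystems", "example"]   # illustrative list, adjust as needed
pattern = "|".join(domains)
multi_df = all_links[all_links.Links.str.contains(pattern, regex=True)]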
I've created a script in Python to fetch different product links from a webpage. Although I know the content of that site is dynamic, I tried the conventional way to show what I have attempted. I looked for APIs in the dev tools but could not find one. Isn't there any way to get those links using requests?
Site Link
This is what I've written so far:
import requests
from bs4 import BeautifulSoup
link = "https://www.amazon.com/stores/node/10699640011"
def fetch_product_links(url):
    res = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
    soup = BeautifulSoup(res.text, "lxml")
    for item_link in soup.select("[id^='ProductGrid-'] li[class^='style__itemOuter__'] > a"):
        print(item_link.get("href"))

if __name__ == '__main__':
    fetch_product_links(link)
How can I fetch different product links from that site using requests?
I think you only need the ASINs, which you can collect from another URL construct you can see in the network tab, i.e. you can significantly shorten the final URLs. You do, however, need to make a request to your original URL to pick up an identifier to use in the second URL. This returns 146 links.
import requests, re, json
node = '10699640011'
with requests.Session() as s:
    r = s.get(f'https://www.amazon.com/stores/node/{node}')
    p = re.compile(r'var slotsStr = "\[(.*?,){3} share\]";')
    identifier = p.findall(r.text)[0]
    identifier = identifier.strip()[:-1]
    r = s.get(f'https://www.amazon.com/stores/slot/{identifier}?node={node}')
    p = re.compile(r'var config = (.*?);')
    data = json.loads(p.findall(r.text)[0])
    asins = data['content']['ASINList']
    links = [f'https://www.amazon.com/dp/{asin}' for asin in asins]
    print(links)
EDIT:
With two given nodes:
import requests, re, json
from bs4 import BeautifulSoup as bs
nodes = ['3039806011','10699640011']
with requests.Session() as s:
    for node in nodes:
        r = s.get(f'https://www.amazon.com/stores/node/{node}')
        soup = bs(r.content, 'lxml')
        identifier = soup.select('.stores-widget-btf:not([id=share],[id*=RECOMMENDATION])')[-1]['id']
        r = s.get(f'https://www.amazon.com/stores/slot/{identifier}?node={node}')
        p = re.compile(r'var config = (.*?);')
        data = json.loads(p.findall(r.text)[0])
        asins = data['content']['ASINList']
        links = [f'https://www.amazon.com/dp/{asin}' for asin in asins]
        print(links)
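Note that both versions index p.findall(...)[0] directly, which raises an IndexError if Amazon stops embedding the var config = ... blob. A more defensive parsing helper (a sketch, with a function name of my own choosing) might look like this, and could replace the json.loads lines inside the loop via links = extract_asin_links(r.text, node):
import json, re

def extract_asin_links(page_text, node):
    # parse the embedded config blob defensively; return [] if the layout changed
    matches = re.findall(r'var config = (.*?);', page_text)
    if not matches:
        print(f'no config found for node {node}, the page layout may have changed')
        return []
    data = json.loads(matches[0])
    asins = data.get('content', {}).get('ASINList', [])
    return [f'https://www.amazon.com/dp/{asin}' for asin in asins]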
I need to get all the links related to a given homepage URL of a website; by all links I mean the links present on the homepage plus the new links that are reached by following the links found on the homepage.
I am using the BeautifulSoup Python library. I am also considering using Scrapy.
The code below extracts only the links found on the homepage.
from bs4 import BeautifulSoup
import requests
url = "https://www.dataquest.io"
def links(url):
    html = requests.get(url).content
    bsObj = BeautifulSoup(html, 'lxml')
    links = bsObj.findAll('a')
    finalLinks = set()
    for link in links:
        finalLinks.add(link)
    return finalLinks

print(links(url))

linklis = list(links(url))
for l in linklis:
    print(l)
    print("\n")
I need a list that includes all URLs/links that can be reached via the homepage URL (whether directly or indirectly linked to the homepage).
This script will print all links found on the url https://www.dataquest.io:
from bs4 import BeautifulSoup
import requests
url = "https://www.dataquest.io"
def links(url):
    html = requests.get(url).content
    bsObj = BeautifulSoup(html, 'lxml')
    links = bsObj.select('a[href]')
    final_links = set()
    for link in links:
        url_string = link['href'].rstrip('/')
        if 'javascript:' in url_string or url_string.startswith('#'):
            continue
        elif 'http' not in url_string and not url_string.startswith('//'):
            url_string = 'https://www.dataquest.io' + url_string
        elif 'dataquest.io' not in url_string:
            continue
        final_links.add(url_string)
    return final_links

for l in sorted(links(url)):
    print(l)
Prints:
http://app.dataquest.io/login
http://app.dataquest.io/signup
https://app.dataquest.io/signup
https://www.dataquest.io
https://www.dataquest.io/about-us
https://www.dataquest.io/blog
https://www.dataquest.io/blog/learn-data-science
https://www.dataquest.io/blog/learn-python-the-right-way
https://www.dataquest.io/blog/the-perfect-data-science-learning-tool
https://www.dataquest.io/blog/topics/student-stories
https://www.dataquest.io/chat
https://www.dataquest.io/course
https://www.dataquest.io/course/algorithms-and-data-structures
https://www.dataquest.io/course/apis-and-scraping
https://www.dataquest.io/course/building-a-data-pipeline
https://www.dataquest.io/course/calculus-for-machine-learning
https://www.dataquest.io/course/command-line-elements
https://www.dataquest.io/course/command-line-intermediate
https://www.dataquest.io/course/data-exploration
https://www.dataquest.io/course/data-structures-algorithms
https://www.dataquest.io/course/decision-trees
https://www.dataquest.io/course/deep-learning-fundamentals
https://www.dataquest.io/course/exploratory-data-visualization
https://www.dataquest.io/course/exploring-topics
https://www.dataquest.io/course/git-and-vcs
https://www.dataquest.io/course/improving-code-performance
https://www.dataquest.io/course/intermediate-r-programming
https://www.dataquest.io/course/intro-to-r
https://www.dataquest.io/course/kaggle-fundamentals
https://www.dataquest.io/course/linear-algebra-for-machine-learning
https://www.dataquest.io/course/linear-regression-for-machine-learning
https://www.dataquest.io/course/machine-learning-fundamentals
https://www.dataquest.io/course/machine-learning-intermediate
https://www.dataquest.io/course/machine-learning-project
https://www.dataquest.io/course/natural-language-processing
https://www.dataquest.io/course/optimizing-postgres-databases-data-engineering
https://www.dataquest.io/course/pandas-fundamentals
https://www.dataquest.io/course/pandas-large-datasets
https://www.dataquest.io/course/postgres-for-data-engineers
https://www.dataquest.io/course/probability-fundamentals
https://www.dataquest.io/course/probability-statistics-intermediate
https://www.dataquest.io/course/python-data-cleaning-advanced
https://www.dataquest.io/course/python-datacleaning
https://www.dataquest.io/course/python-for-data-science-fundamentals
https://www.dataquest.io/course/python-for-data-science-intermediate
https://www.dataquest.io/course/python-programming-advanced
https://www.dataquest.io/course/r-data-cleaning
https://www.dataquest.io/course/r-data-cleaning-advanced
https://www.dataquest.io/course/r-data-viz
https://www.dataquest.io/course/recursion-and-tree-structures
https://www.dataquest.io/course/spark-map-reduce
https://www.dataquest.io/course/sql-databases-advanced
https://www.dataquest.io/course/sql-fundamentals
https://www.dataquest.io/course/sql-fundamentals-r
https://www.dataquest.io/course/sql-intermediate-r
https://www.dataquest.io/course/sql-joins-relations
https://www.dataquest.io/course/statistics-fundamentals
https://www.dataquest.io/course/statistics-intermediate
https://www.dataquest.io/course/storytelling-data-visualization
https://www.dataquest.io/course/text-processing-cli
https://www.dataquest.io/directory
https://www.dataquest.io/forum
https://www.dataquest.io/help
https://www.dataquest.io/path/data-analyst
https://www.dataquest.io/path/data-analyst-r
https://www.dataquest.io/path/data-engineer
https://www.dataquest.io/path/data-scientist
https://www.dataquest.io/privacy
https://www.dataquest.io/subscribe
https://www.dataquest.io/terms
https://www.dataquest.io/were-hiring
https://www.dataquest.io/wp-content/uploads/2019/03/db.png
https://www.dataquest.io/wp-content/uploads/2019/03/home-code-1.jpg
https://www.dataquest.io/wp-content/uploads/2019/03/python.png
EDIT: Changed the selector to a[href]
EDIT2: A primitive recursive crawler:
def crawl(urls, seen=set()):
    for url in urls:
        if url not in seen:
            print(url)
            seen.add(url)
            new_links = links(url)
            crawl(urls.union(new_links), seen)
starting_links = links(url)
crawl(starting_links)
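The recursive version can hit Python's recursion limit on larger sites and relies on a mutable default argument for seen; an iterative sketch using the same links() function avoids both:
from collections import deque

def crawl_iterative(start_url):
    seen = set()
    queue = deque([start_url])          # FIFO queue gives breadth-first order
    while queue:
        current = queue.popleft()
        if current in seen:
            continue
        seen.add(current)
        print(current)
        for new_link in links(current):  # reuse the links() function defined above
            if new_link not in seen:
                queue.append(new_link)

crawl_iterative(url)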
I am trying to parse https://www.drugbank.ca/drugs. The idea is to extract all the drug names and some additional information for each drug. As you can see, each webpage shows a table of drug names, and when we click a drug name we can access that drug's information.
Let's say I keep the following code to handle the pagination:
import requests
from bs4 import BeautifulSoup
def drug_data():
    url = 'https://www.drugbank.ca/drugs/'
    while url:
        print(url)
        r = requests.get(url)
        soup = BeautifulSoup(r.text, "lxml")
        #data = soup.select('name-head a')
        #for link in data:
        #    href = 'https://www.drugbank.ca/drugs/' + link.get('href')
        #    pages_data(href)
        # next page url
        url = soup.findAll('a', {'class': 'page-link', 'rel': 'next'})
        print(url)
        if url:
            url = 'https://www.drugbank.ca' + url[0].get('href')
        else:
            break

drug_data()
The issue is that on each page, and for each drug in the table on that page, I need to capture:
Name
Accession Number
Structured Indications
Generic Prescription Products
I used the classic requests/BeautifulSoup approach but can't go any deeper.
Some help, please.
Create a function with requests and BeautifulSoup to get the data from each subpage:
import requests
from bs4 import BeautifulSoup
def get_details(url):
    print('details:', url)
    # get subpage
    r = requests.get(url)
    soup = BeautifulSoup(r.text, "lxml")
    # get data on subpage
    dts = soup.findAll('dt')
    dds = soup.findAll('dd')
    # display details
    for dt, dd in zip(dts, dds):
        print(dt.text)
        print(dd.text)
        print('---')
    print('---------------------------')

def drug_data():
    url = 'https://www.drugbank.ca/drugs/'
    while url:
        print(url)
        r = requests.get(url)
        soup = BeautifulSoup(r.text, "lxml")
        # get links to subpages
        links = soup.select('strong a')
        for link in links:
            # execute function to get subpage
            get_details('https://www.drugbank.ca' + link['href'])
        # next page url
        url = soup.findAll('a', {'class': 'page-link', 'rel': 'next'})
        print(url)
        if url:
            url = 'https://www.drugbank.ca' + url[0].get('href')
        else:
            break

drug_data()
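If you only need the four fields from the question (Name, Accession Number, Structured Indications, Generic Prescription Products) collected into a record instead of printing everything, here is a hedged sketch built on the same dt/dd structure; the assumption that the drug name sits in the page's h1 tag and that the dt labels match these strings exactly is mine, not verified against the site:
import requests
from bs4 import BeautifulSoup

WANTED = {'Accession Number', 'Structured Indications', 'Generic Prescription Products'}

def get_record(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.text, "lxml")
    h1 = soup.find('h1')
    record = {'Name': h1.text.strip() if h1 else None}   # assumed: drug name in <h1>
    for dt, dd in zip(soup.findAll('dt'), soup.findAll('dd')):
        label = dt.text.strip()
        if label in WANTED:
            record[label] = dd.text.strip()
    return record

# usage: call it where get_details() is called above, e.g.
# record = get_record('https://www.drugbank.ca' + link['href'])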
To crawl effectively, you'll want to implement a few measures, such as maintaining a queue of URLs to visit and keeping track of which URLs you have already visited.
Keeping in mind that links can be absolute or relative and that redirects are very likely, you also probably want to construct the URLs dynamically rather than by string concatenation.
Here is a generic crawling workflow (we usually only want to use example.com on SO)...
from urllib.parse import urljoin, urlparse  # python 3
# from urlparse import urljoin, urlparse   # legacy python 2
import requests
from bs4 import BeautifulSoup

def process_page(soup):
    '''data extraction process'''
    pass

def is_external(link, base='example.com'):
    '''determine if the link is external to base'''
    site = urlparse(link).netloc
    return base not in site

def resolve_link(current_location, href):
    '''resolves final location of a link including redirects'''
    req_loc = urljoin(current_location, href)
    response = requests.head(req_loc, allow_redirects=True)  # HEAD does not follow redirects by default
    resolved_location = response.url  # location after redirects
    # if you don't want to visit external links...
    if is_external(resolved_location):
        return None
    return resolved_location

url_queue = ['https://example.com']
visited = set()

while url_queue:
    url = url_queue.pop()  # removes a url from the queue and assigns it to `url`
    response = requests.get(url)
    current_location = response.url  # final location after redirects
    visited.add(url)  # note that we've visited the given url
    visited.add(current_location)  # and the final location
    soup = BeautifulSoup(response.text, 'lxml')
    process_page(soup)  # scrape the page
    link_tags = soup.find_all('a')  # gather additional links
    for anchor in link_tags:
        href = anchor.get('href')
        if not href:  # skip anchors without an href attribute
            continue
        link_location = resolve_link(current_location, href)
        if link_location and link_location not in visited:
            url_queue.append(link_location)
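One design note on the queue: list.pop() removes from the end, so the crawl proceeds roughly depth-first; if breadth-first order matters, a collections.deque with popleft() is a drop-in sketch:
from collections import deque

url_queue = deque(['https://example.com'])
visited = set()

while url_queue:
    url = url_queue.popleft()   # FIFO: oldest discovered URL first (breadth-first)
    if url in visited:
        continue
    visited.add(url)
    # fetch, scrape, and enqueue new links exactly as in the loop body above
    print(url)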
I am learning how to use BeautifulSoup and I have run into an issue with double printing in a loop I have written.
Any insight would be greatly appreciated!
from bs4 import BeautifulSoup
import requests
import re
page = 'https://news.google.com/news/headlines?gl=US&ned=us&hl=en' #main page
#url = raw_input("Enter a website to extract the URL's from: ")
r = requests.get(page) #requests html document
data = r.text #set data = to html text
soup = BeautifulSoup(data, "html.parser") #parse data with BS
for link in soup.find_all('a'):
    #if contains /news/
    if ('/news/' in link.get('href')):
        print(link.get('href'))
Examples:
count = 0  # counter for printed links
for link in soup.find_all('a'):
    #if contains cointelegraph/news/
    #if ('https://cointelegraph.com/news/' in link.get('href')):
    url = link.get('href')  # local var to store url
    if '/news/' in url:
        print(url)
        print(count)
        count += 1
        if count == 5:
            break
output:
https://cointelegraph.com/news/woman-in-denmark-imprisoned-for-hiring-hitman-using-bitcoin
0
https://cointelegraph.com/news/ethereum-price-hits-all-time-high-of-750-following-speed-boost
1
https://cointelegraph.com/news/ethereum-price-hits-all-time-high-of-750-following-speed-boost
2
https://cointelegraph.com/news/senior-vp-says-ebay-seriously-considering-bitcoin-integration
3
https://cointelegraph.com/news/senior-vp-says-ebay-seriously-considering-bitcoin-integration
4
For some reason my code keeps printing out the same url twice...
Based on your code and the provided link, there seem to be duplicates in the results of the BeautifulSoup find_all search. The HTML structure needs to be checked to see why duplicates are returned (check the find_all options in the documentation to filter some of them out). But if you want a quick fix and just want to remove the duplicates from the printed results, you can use the modified loop below, which uses a set to keep track of seen entries (based on this).
In [78]: l = [link.get('href') for link in soup.find_all('a') if '/news/' in link.get('href')]
In [79]: any(l.count(x) > 1 for x in l)
Out[79]: True
The above output shows that duplicates exist in the list. To remove them, use something like:
seen = set()
for link in soup.find_all('a'):
    lhref = link.get('href')
    if '/news/' in lhref and lhref not in seen:
        print(lhref)
        seen.add(lhref)
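One more caveat: link.get('href') returns None for anchors without an href attribute, which would make the '/news/' in lhref test raise a TypeError; restricting find_all to anchors that actually have an href avoids that:
seen = set()
for link in soup.find_all('a', href=True):   # only anchors that carry an href
    lhref = link.get('href')
    if '/news/' in lhref and lhref not in seen:
        print(lhref)
        seen.add(lhref)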