BeautifulSoup - Find link in this HTML - python

Here's my code to get the html:
from bs4 import BeautifulSoup
import urllib.request
from fake_useragent import UserAgent
url = "https://blahblah.com"
ua = UserAgent()
ran_header = ua.random
req = urllib.request.Request(url,data=None,headers={'User-Agent': ran_header})
uClient = urllib.request.urlopen(req)
page_html = uClient.read()
uClient.close()
html_source = BeautifulSoup(page_html, "html.parser")
results = html_source.findAll("a",{"onclick":"googleTag('click-listings-item-image');"})
From here results contains various listings containing different info. If I then print(results[0]):
<a href="https://blahblah.com//link//asdfqwersdf" onclick="googleTag('click-listings-item-image');">
<div class="results-panel-new col-sm-12">
<div class="row">
<div class="col-xs-12 col-sm-3 col-lg-2 text-center thumb-table-cell">
<span class="eq-table-new text-center"><img class="img-thumbnail" src="//images/120x90/7831a94157234bc6.jpg" /></span>
</div>
<div class="col-xs-12 hidden-sm hidden-md col-lg-1 text-center thumb-table-cell">
<span class="eq-table-new text-center"><span class="hidden-sm hidden-md hidden-lg">Year: </span>2000</span>
</div>
<div class="col-xs-12 hidden-sm hidden-md col-lg-2 text-center thumb-table-cell">
<span class="eq-table-new text-center">Fake City, USA</span>
</div>
<div class="col-xs-12 col-sm-3 col-lg-2 text-center thumb-table-cell">
<span class="eq-table-new text-center"><span class="hidden-sm hidden-md hidden-lg">Price: </span>$900</span>
</div>
</div>
<div class="row">
<div class="hidden-xs col-sm-12 table_details_new"><span>Descriptive details</span></div>
</div>
</div><!-- results-panel-new -->
</a>
I can get the image, Year, Location, and Price by doing a variation of this:
ModelYear = results[0].div.find("div",{"class":"col-xs-12 hidden-sm hidden-md col-lg-1 text-center thumb-table-cell"}).span.text
How do I get the very first href from results[0]?

Based on the chat discussion, the href is available directly via: results[0]['href'].

You can use find_all(..., href=True) to match only tags that actually have an href attribute, e.g.:
results[0].find_all('a', href=True)[0]
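A minimal, self-contained sketch (with made-up HTML) of what href=True filters — only anchors that carry an href are returned:

```python
from bs4 import BeautifulSoup

# Hypothetical fragment: one anchor with an href, one without
html = '<a href="/x">with</a><a>without</a>'
soup = BeautifulSoup(html, "html.parser")

with_href = soup.find_all("a", href=True)
print([a["href"] for a in with_href])  # only the anchor that has an href
```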

Your selector is returning an a tag element, as you can see in the printout. So yes, you can simply access the href directly with results[0]['href']. You can also tell because the entire panel (the card displaying the listing) is a clickable element on the page. If you wanted to make this clearer, you could change your selector for results to #js_thumb_view ~ a, which is also a faster selector.
results = html_source.select('#js_thumb_view ~ a')
Then all links, for example, with
links = [result['href'] for result in results]
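To illustrate end to end, here is a minimal runnable sketch; the HTML is a made-up stand-in for the listing page, since each result is itself an <a> tag and the href is a plain attribute lookup:

```python
from bs4 import BeautifulSoup

# Hypothetical sample mimicking the listing page's structure
html = '''
<div id="listings">
<a href="https://blahblah.com/link/asdf" onclick="googleTag('click-listings-item-image');">item 1</a>
<a href="https://blahblah.com/link/qwer" onclick="googleTag('click-listings-item-image');">item 2</a>
</div>
'''

soup = BeautifulSoup(html, "html.parser")
results = soup.find_all("a", {"onclick": "googleTag('click-listings-item-image');"})

# Each result is an <a> tag, so ['href'] reads the attribute directly
links = [result["href"] for result in results]
print(links)
```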

Related

how to get specific links with BeautifulSoup?

I am trying to crawl HTML source with Python using BeautifulSoup.
I need to get the href of specific link <a> tags.
This is my test code. I want to get links like <a href="/example/test/link/activity1~10" target="_blank">
<div class="listArea">
<div class="activity_sticky" id="activity">
.
.
</div>
<div class="activity_content activity_loaded">
<div class="activity-list-item activity_item__1fhpg">
<div class="activity-list-item_activity__3FmEX">
<div>...</div>
<a href="/example/test/link/activity1" target="_blank">
<div class="activity-list-item_addr">
<span> 0x1292311</span>
</div>
</a>
</div>
</div>
<div class="activity-list-item activity_item__1fhpg">
<div class="activity-list-item_activity__3FmEX">
<div>...</div>
<a href="/example/test/link/activity2" target="_blank">
<div class="activity-list-item_addr">
<span> 0x1292312</span>
</div>
</a>
</div>
</div>
.
.
.
</div>
</div>
Check the main page of the bs4 documentation:
for link in soup.find_all('a'):
print(link.get('href'))
Here is code for the problem: find all the <a> tags, then get the value of each href.
soup = BeautifulSoup(html, 'html.parser')
for i in soup.find_all('a'):
    if i.get('target') == "_blank":  # .get() avoids a KeyError on anchors without target
        print(i['href'])
Hope this helps.
As an alternative to @Mason Ma's answer, you can also select the specific <a> with CSS selectors:
soup.select('.activity_content a')
or filter by its target attribute:
soup.select('.activity_content a[target="_blank"]')
Example
Will give you a list of links, matching your condition:
import requests
from bs4 import BeautifulSoup
html = '''
<div class="activity_content activity_loaded">
<div class="activity-list-item activity_item__1fhpg">
<div class="activity-list-item_activity__3FmEX">
<div>...</div>
<a href="/example/test/link/activity1" target="_blank">
<div class="activity-list-item_addr">
<span> 0x1292311</span>
</div>
</a>
</div>
</div>
<div class="activity-list-item activity_item__1fhpg">
<div class="activity-list-item_activity__3FmEX">
<div>...</div>
<a href="/example/test/link/activity2" target="_blank">
<div class="activity-list-item_addr">
<span> 0x1292312</span>
</div>
</a>
</div>
</div>
'''
soup = BeautifulSoup(html, "html.parser")
[x['href'] for x in soup.select('.activity_content a[target="_blank"]')]
Output
['/example/test/link/activity1', '/example/test/link/activity2']
Based on my understanding of your question, you're trying to extract the links (href) from anchor tags where the target value is _blank. You can do this by searching for all anchor tags, then narrowing down to those whose target == '_blank':
links = soup.findAll('a', attrs = {'target' : '_blank'})
for link in links:
print(link.get('href'))
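For completeness, a self-contained version of this approach, using a made-up fragment shaped like the question's HTML:

```python
from bs4 import BeautifulSoup

# Hypothetical fragment shaped like the question's markup
html = '''
<div class="activity_content">
<a href="/example/test/link/activity1" target="_blank">a</a>
<a href="/example/test/link/activity2" target="_blank">b</a>
</div>
'''
soup = BeautifulSoup(html, "html.parser")

# attrs narrows the match to anchors with target="_blank"
links = [a.get("href") for a in soup.find_all("a", attrs={"target": "_blank"})]
print(links)
```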

How to extract the data from encoded HTML class using python

How can I retrieve the page encoded div class of a webpage (title html tag) using Python?
Here is my sample html code.
You need to use requests to make a request (it will automatically decode the page, in most cases), and beautifulsoup to extract the data from the HTML.
Update after OP clarifications: the CSS classes are not dynamically updated; they stay the same (that's what I noticed). Since they're stable, you can:
grab a container that wraps all the needed data (via a CSS selector)
for result in soup.select(".pSzOP-AhqUyc-qWD73c.GNzUNc span"):
# ...
use regex to filter (find) all needed data via re.findall() and a capture group (.*): only the text matched inside the group is captured and returned; .* means capture everything.
if re.findall(r"^Telephone\s?:\s?(.*)", result.text):
# ...
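A tiny standalone demonstration of the capture-group behaviour — only the part after the colon is returned:

```python
import re

text = "Telephone : 01564 773348"

# (.*) captures everything after "Telephone :", optional spaces tolerated
match = re.findall(r"^Telephone\s?:\s?(.*)", text)
print(match)
```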
Have a look at the SelectorGadget Chrome extension to grab CSS selectors by clicking on the desired element in your browser. On that note, there's a dedicated web scraping with CSS selectors blog post of mine.
Code and example in the online IDE:
import requests, re
from bs4 import BeautifulSoup
html = requests.get("https://sites.google.com/a/arden.solihull.sch.uk/futures/home")
soup = BeautifulSoup(html.text, "html.parser")
# all regular expressions for this task
# https://regex101.com/r/cxdxgq/1
for result in soup.select(".pSzOP-AhqUyc-qWD73c.GNzUNc span"):
if re.findall(r"^Careers\s?.*\s?:\s?(.*)", result.text):
name = "".join(re.findall(r"^Careers\s?.*\s?:\s?(.*)", result.text.strip()))
print(name)
if re.findall(r"^Telephone\s?:\s?(.*)", result.text):
telephone = "".join(re.findall(r"^Telephone\s?:\s?(.*)", result.text.strip()))
print(telephone)
if re.findall(r"^Email\s?:\s?(.*)", result.text):
email = "".join(re.findall(r"^Email\s?:\s?(.*)", result.text.strip()))
print(email)
# to scrape the role you can do the same thing with regex. Test on regex101.com
'''
Mrs A. Fallis
01564 773348
afallis@arden.solihull.sch.uk
Mr S. Brady
01564 7733478
sbrady@arden.solihull.sch.uk
'''
First solution, without the OP clarifications (it shows only the extraction part, since you haven't provided a website URL):
from bs4 import BeautifulSoup
html = """
<div class="L581yb VICjCf" hjdwnd-ahquyc-r6poud="" jndksc="" l6ctce-pszop"="" l6ctce-purzt="" tabindex=" == $0
<div class=">
</div>
<div class="hJDwNd-AhqUyc-WNfPc purZT-AhqUyC-I15mzb PSzOP-AhqUyc-qWD73c JNdks <div class=" jndksc-smkayb"="">
<div class="" f570id"="" jsaction="zXBUYD: ZTPCnb; 2QF9Uc: Qxe3nd;
jsname=" jscontroller="SGWD4d">
>
<div class="oKdM2C KzvoMe">
<div class="hJDwNd-AhqUyc-WNFPC PSzOP-AhqUyc- qWD73c jXK9ad D2fZ2 Oj CsFc whaque GNzUNC" id="h.7f5e93de0cf8a767_49">
<div class="]XK9ad-SmkAyb">
<div class="ty]Ctd mGzaTb baZpAe">
<div class="GV3q8e aP9Z7e" id="h.p_9livxd801krd">
</div>
<h3 class="CDt4ke zfr3Q OmQG5e" dir="ltr" id="h.p_9livxd801krd" tabindex="-1">
.
</h3>
<div class="GV3q8e aP9z7e" id="h.p JrEgQYpyORCF">
</div>
<h3 class="CDt 4Ke zfr3Q OmQG5e" dir="ltr" id="h.p_JrEgQYPYORCF" tabindex="-1">
<div class="CjVfdc" jsaction="touchstart:UrsOsc; click:Kjs
qPd; focusout:QZoaz; mouseover:yOpDld; mouseout:dq0hvd;fvlRjc:jbFSO
d;CrflRd:SzACGe;" jscontroller="Ae65rd">
<div class="PPHIP rviiZ" jsname="haAclf">
.
</div>
<span style="font-family: 'Oswald'; font-weight: 500;">
Telephone : 01564 773348
</span>
</div>
</h3>
<div class="GV3q8e aP9z7e" id="h.p_sylefz-BOSBX">
</div>
><h3 id="h.p_sylefz-BOSBX" dir="ltr" class="CDt 4Ke zfr3Q OmQG5e"
</div>
</div>
</div>
</div>
</div>
</div>
"""
# pass HTML to BeautifulSoup object and assign a html.parser as a HTML parser
soup = BeautifulSoup(html, "html.parser")
# grab a phone number (only first occurrence will be extracted)
# https://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors
print(soup.select_one('.CjVfdc span').text.strip())
# Telephone : 01564 773348
# extract <div> element with .L581yb class. returns a list()
print(soup.select('.L581yb'))
'''
[<div class="L581yb VICjCf" hjdwnd-ahquyc-r6poud="" jndksc="" l6ctce-pszop"="" l6ctce-purzt="" tabindex=" == $0
<div class=">
</div>]
'''
# extract <div> element with .hJDwNd-AhqUyc-WNfPc class. returns a list()
print(soup.select('.hJDwNd-AhqUyc-WNfPc'))
'''
[<div class="hJDwNd-AhqUyc-WNfPc purZT-AhqUyC-I15mzb PSzOP-AhqUyc-qWD73c JNdks <div class=" jndksc-smkayb"="">
<div class="" f570id"="" jsaction="zXBUYD: ZTPCnb; 2QF9Uc: Qxe3nd;
jsname=" jscontroller="SGWD4d">
>
<div class="oKdM2C KzvoMe">
<div class="hJDwNd-AhqUyc-WNFPC PSzOP-AhqUyc- qWD73c jXK9ad D2fZ2 Oj CsFc whaque GNzUNC" id="h.7f5e93de0cf8a767_49">
<div class="]XK9ad-SmkAyb">
<div class="ty]Ctd mGzaTb baZpAe">
<div class="GV3q8e aP9Z7e" id="h.p_9livxd801krd">
</div>
<h3 class="CDt4ke zfr3Q OmQG5e" dir="ltr" id="h.p_9livxd801krd" tabindex="-1">
.
</h3>
<div class="GV3q8e aP9z7e" id="h.p JrEgQYpyORCF">
</div>
<h3 class="CDt 4Ke zfr3Q OmQG5e" dir="ltr" id="h.p_JrEgQYPYORCF" tabindex="-1">
<div class="CjVfdc" jsaction="touchstart:UrsOsc; click:Kjs
qPd; focusout:QZoaz; mouseover:yOpDld; mouseout:dq0hvd;fvlRjc:jbFSO
d;CrflRd:SzACGe;" jscontroller="Ae65rd">
<div class="PPHIP rviiZ" jsname="haAclf">
.
</div>
<span style="font-family: 'Oswald'; font-weight: 500;">
Telephone : 01564 773348
</span>
</div>
</h3>
<div class="GV3q8e aP9z7e" id="h.p_sylefz-BOSBX">
</div>
><h3 id="h.p_sylefz-BOSBX" dir="ltr" class="CDt 4Ke zfr3Q OmQG5e"
</div>
</div>
</div>
</div>
</div>
</div>]
'''

How to get the text of the next tag? (Beautiful Soup)

The html code is :
<div class="card border p-3">
<span class="small text-muted">Contact<br></span>
<div>Steven Cantrell</div>
<div class="small">Department of Justice</div>
<div class="small">Federal Bureau of Investigation</div>
<!---->
<!---->
<!---->
<div class="small">skcantrell@fbi.gov</div>
<div class="small">256-313-8835</div>
</div>
I want to get the output inside the <div> tag, i.e. Steven Cantrell.
I need a way to get the contents of the tag that follows 'span',{'class':'small text-muted'}.
What I tried is :
rfq_name = soup.find('span',{'class':'small text-muted'})
print(rfq_name.next)
But this printed Contact instead of the name.
You're nearly there; just change your print to: print(rfq_name.find_next('div').text)
Find the element that has the text "Contact". Then use .find_next() to get the next <div> tag.
from bs4 import BeautifulSoup
html = '''<div class="card border p-3">
<span class="small text-muted">Contact<br></span>
<div>Steven Cantrell</div>
<div class="small">Department of Justice</div>
<div class="small">Federal Bureau of Investigation</div>
<!---->
<!---->
<!---->
<div class="small">skcantrell@fbi.gov</div>
<div class="small">256-313-8835</div>
</div>'''
soup = BeautifulSoup(html, 'html.parser')
contact = soup.find(text='Contact').find_next('div').text
Output:
print(contact)
Steven Cantrell
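As a side note, newer Beautiful Soup versions spell the text= argument as string= (text= is kept as an alias); a minimal sketch on a trimmed copy of the question's HTML:

```python
from bs4 import BeautifulSoup

# Trimmed fragment of the question's markup
html = '<span class="small text-muted">Contact<br></span><div>Steven Cantrell</div>'
soup = BeautifulSoup(html, "html.parser")

# string= matches the text node, find_next('div') jumps to the following tag
contact = soup.find(string="Contact").find_next("div").text
print(contact)
```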

How to get parents of an element with xpath in selenium (python)

I have a script that grabs all the images I want on a webpage, and then I need the link that encloses each image.
Currently I click on every image, take the current page URL, and then go back and continue with the work. This is slow, but there is an a tag that wraps my image, and I don't know how to retrieve that tag. With the tag it would be easier and faster. I attach the HTML code and my Python code!
HTML code
<div class="col-xl col-lg col-md-4 col-sm-6 col-6">
<a href="URL I WANT TO GET ">
<article>
<span class="year">2017</span>
<span class="quality">4K</span>
<span class="imdb">6.7</span>
<img width="190" height="279" src="THE IMAGE URL" class="img-full wp-post-image" alt="" loading="lazy"> <h2>TITLE</h2>
</article>
</a></div>
<div class="col-xl col-lg col-md-4 col-sm-6 col-6">
<a href="URL I WANT TO GET 2">
<article>
<span class="year">2019</span>
<span class="quality">4K</span>
<span class="imdb">8.0</span>
<img width="190" height="279" src="THE IMAGE URL 2" class="img-full wp-post-image" alt="" loading="lazy"> <h2>TITLE</h2>
</article>
</a></div>
Python code
self.driver.get(category_url)
WebDriverWait(self.driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'div.archivePaging'))) # a div to see if page is loaded
movies_buttons = self.driver.find_elements_by_css_selector('img.img-full.wp-post-image')
print("Getting all the links!")
for movie in movies_buttons:
self.driver.execute_script("arguments[0].scrollIntoView();", movie)
movie.click()
WebDriverWait(self.driver, 10).until(EC.visibility_of_element_located((By.CLASS_NAME, 'infoFilmSingle')))
print(self.driver.current_url)
self.driver.back()
WebDriverWait(self.driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'div.archivePaging')))
Note that this code doesn't work as posted, because I'm referencing a movie element from an old page; but that's not a problem, because if I could just read the link directly I wouldn't need to change pages, so the session wouldn't change.
An example based on what I understand you want to do - you want to get the parent a tag's href.
Example
from selenium import webdriver
driver = webdriver.Chrome(executable_path=r'C:\Program Files\ChromeDriver\chromedriver.exe')
html_content = """
<div class="col-xl col-lg col-md-4 col-sm-6 col-6">
<a href="https://www.link1.de">
<article>
<span class="year">2017</span>
<span class="quality">4K</span>
<span class="imdb">6.7</span>
<img width="190" height="279" src="THE IMAGE URL" class="img-full wp-post-image" alt="" loading="lazy"> <h2>TITLE</h2>
</article>
</a>
</div>
<div class="col-xl col-lg col-md-4 col-sm-6 col-6">
<a href="https://www.link2.de">
<article>
<span class="year">2019</span>
<span class="quality">4K</span>
<span class="imdb">8.0</span>
<img width="190" height="279" src="THE IMAGE URL 2" class="img-full wp-post-image" alt="" loading="lazy"> <h2>TITLE</h2>
</article>
</a>
</div>
"""
driver.get("data:text/html;charset=utf-8,{html_content}".format(html_content=html_content))
Locate the image elements by their class and walk up the element structure with .., in this case /../..
aTags = driver.find_elements_by_xpath("//img[contains(@class,'img-full wp-post-image')]/../..")
for ele in aTags:
x=ele.get_attribute('href')
print(x)
driver.close()
Output
https://www.link1.de/
https://www.link2.de/
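If you'd rather not count the /../.. steps, XPath's ancestor:: axis climbs to the enclosing <a> regardless of depth. A quick way to test the expression itself without a browser is lxml (assuming it's installed); the same expression works with Selenium's find_elements_by_xpath:

```python
from lxml import html as lxml_html

# Hypothetical fragment shaped like the question's markup
doc = lxml_html.fromstring('''
<div><a href="https://www.link1.de"><article>
<img class="img-full wp-post-image" src="x"/></article></a></div>
''')

# ancestor::a selects the enclosing <a>, however deeply the img is nested
hrefs = [a.get("href") for a in
         doc.xpath("//img[contains(@class,'img-full wp-post-image')]/ancestor::a")]
print(hrefs)
```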

Get href links from a tag

This is just one part of the HTML and there are multiple products on the page with the same HTML construction
I want all the href's for all the products on page
<div class="row product-layout-category product-layout-list">
<div class="product-col wow fadeIn animated" style="visibility: visible;">
<a href="the link I want" class="product-item">
<div class="product-item-image">
<img data-src="link to an image" alt="name of the product" title="name of the product" class="img-responsive lazy" src="link to an image">
</div>
<div class="product-item-desc">
<p><span><strong>brand</strong></span></p>
<p><span class="font-size-16">name of the product</span></p>
<p class="product-item-price">
<span>product price</span></p>
</div>
</a>
</div>
.
.
.
With this code I wrote I only get None printed a bunch of times
from bs4 import BeautifulSoup
import requests
url = 'link to the site'
response = requests.get(url)
page = response.content
soup = BeautifulSoup(page, 'html.parser')
##this includes the part that I gave you
items = soup.find('div', {'class': 'product-layout-category'})
allItems = items.find_all('a')
for n in allItems:
print(n.href)
How can I get it to print all the href's in there?
Looking at your HTML code, you can use CSS selector a.product-item. This will select all <a> tags with class="product-item":
from bs4 import BeautifulSoup
html_text = """
<div class="row product-layout-category product-layout-list">
<div class="product-col wow fadeIn animated" style="visibility: visible;">
<a href="the link I want" class="product-item">
<div class="product-item-image">
<img data-src="link to an image" alt="name of the product" title="name of the product" class="img-responsive lazy" src="link to an image">
</div>
<div class="product-item-desc">
<p><span><strong>brand</strong></span></p>
<p><span class="font-size-16">name of the product</span></p>
<p class="product-item-price">
<span>product price</span></p>
</div>
</a>
</div>
"""
soup = BeautifulSoup(html_text, "html.parser")
for link in soup.select("a.product-item"):
print(link.get("href")) # or link["href"]
Prints:
the link I want
