I am very new to Python and Scrapy, and when I try to iterate over nested HTML elements I don't get the desired result.
Below is the HTML I am trying to scrape.
<div class="level1" role="main">
<div class="level2">
<h1 id="fullStoreHeading" class="class_h1">Page Title</h1>
<div class="fsdColumn_3">
<div class='fsdDeptBox'>
<img alt="" src="" aria-hidden="true" height="100%" width="100%">
<h2 class="fsdDeptTitle">TV</h2>
<div class='fsdDeptCol'>
<a class="class_a" href="/test?_encoding=UTF8&id=1001">Samsung</a>
<a class="class_a" href="/test?_encoding=UTF8&id=1002">Vizio</a>
<a class="class_a" href="/test?_encoding=UTF8&id=1003">Element</a>
</div>
</div>
<div class='fsdDeptBox'>
<img alt="" src="" aria-hidden="true" height="100%" width="100%">
<h2 class="fsdDeptTitle">Laptop</h2>
<div class='fsdDeptCol'>
<a class="class_a" href="/test?_encoding=UTF8&id=1004">Apple</a>
<a class="class_a" href="/test?_encoding=UTF8&id=1005">Microsoft</a>
<a class="class_a" href="/test?_encoding=UTF8&id=1006">Dell</a>
</div>
</div>
</div>
<div class="fsdColumn_3">
<div class='fsdDeptBox'>
<img alt="" src="" aria-hidden="true" height="100%" width="100%">
<h2 class="fsdDeptTitle">Video Game Console</h2>
<div class='fsdDeptCol'>
<a class="class_a" href="/test?_encoding=UTF8&id=1007">Xbox One</a>
<a class="class_a" href="/test?_encoding=UTF8&id=1008">Xbox 360</a>
<a class="class_a" href="/test?_encoding=UTF8&id=1009">PS 5</a>
</div>
</div>
<div class='fsdDeptBox'>
<img alt="" src="" aria-hidden="true" height="100%" width="100%">
<h2 class="fsdDeptTitle">SSD</h2>
<div class='fsdDeptCol'>
<a class="class_a" href="/test?_encoding=UTF8&id=1010">Samsung Evo</a>
<a class="class_a" href="/test?_encoding=UTF8&id=1011">Crucial</a>
<a class="class_a" href="/test?_encoding=UTF8&id=1012">Sandisk</a>
</div>
</div>
</div>
</div>
The output I am trying to generate from the above HTML is a list of:
Product Category -> Brand -> Id
E.g.
TV
Samsung 1001
Vizio 1002
Element 1003
Laptop
Apple 1004
Microsoft 1005
Dell 1006
Video Game Console
Xbox One 1007
Xbox 360 1008
PS 5 1009
ProductCategories.py
def parse(self, response):
    l = ItemLoader(item=ProductSpiderItem(), response=response)
    titles = response.xpath('//*[@class="fsdDeptTitle"]')
    for title in titles:
        Product_Category = title.xpath('text()').extract()
        l.add_value('Product_Category', Product_Category)
        for brnd in title.xpath('//*[@class="fsdDeptCol"]/a[@class="class_a"]'):
            Brand = brnd.xpath('text()').extract()
            l.add_value('Brand', Brand)
    return l.load_item()
At the moment the outer for loop prints all of the product categories once, and the inner for loop prints every brand irrespective of product category, repeating the full list of brands on every pass of the outer loop.
I would really appreciate any help to resolve the issue.
Thanks a lot.
Your first 'for' loop iterates over just the title elements, e.g. the <h2 class="fsdDeptTitle">SSD</h2> part of the HTML. You are then trying to look within that selection to find class="class_a", but it can't, because the first 'for' loop is too specific to also select the HTML where 'class_a' lives.
You can fix this by having your 'for' loops look one level higher in the HTML.
titles = response.xpath("//*[@class='fsdDeptBox']")
for title in titles:
    Product_Category = title.xpath('h2[@class="fsdDeptTitle"]/text()').extract()
    l.add_value('Product_Category', Product_Category)
    for brnd in title.xpath('div[@class="fsdDeptCol"]'):
        Brand = brnd.xpath('*/text()').extract()
        l.add_value('Brand', Brand)
return l.load_item()
I changed the first 'for' loop to select enough of the HTML to include a path to the 'class_a' text
Side note. I don't know much about the correct HTML terms but I hope this still made sense.
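If you also need the numeric id from each link (your desired output shows Brand -> Id), a rough sketch of the same inner loop, assuming your item also defines a Brand_Id field, would iterate the anchors directly and pull the id out of the href with a regex:

for brnd in title.xpath('div[@class="fsdDeptCol"]/a[@class="class_a"]'):
    Brand = brnd.xpath('text()').extract()
    # the href looks like /test?_encoding=UTF8&id=1001, so capture the digits after id=
    Brand_Id = brnd.xpath('@href').re_first(r'id=(\d+)')
    l.add_value('Brand', Brand)
    l.add_value('Brand_Id', Brand_Id)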
I think you should read up a bit more on how ItemLoaders work. They also depend on how your items and item loaders are defined; for example, let's assume you've defined them like this:
from scrapy import Item, Field
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst

class ProductItem(Item):
    category = Field()
    brand = Field()

class ProductItemLoader(ItemLoader):
    default_item_class = ProductItem
    default_output_processor = TakeFirst()
then you could do something like this:
for product in response.css('.fsdDeptCol a'):
    il = ProductItemLoader(selector=product)
    il.add_xpath('category', './ancestor::*/preceding-sibling::h2/text()')
    il.add_xpath('brand', './text()')
    yield il.load_item()
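If you also need the numeric id from the href, add_xpath accepts a re argument; a sketch, assuming the item also defines a brand_id field, would add one line inside the loop:

    il.add_xpath('brand_id', './@href', re=r'id=(\d+)')

With the HTML in the question, this loop should then yield one item per brand, roughly {'category': 'TV', 'brand': 'Samsung', 'brand_id': '1001'} and so on.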
How can I retrieve the page encoded div class of a webpage (title html tag) using Python?
Here is my sample HTML code.
You need to use requests to make a request (it will automatically decode the page, in most cases), and beautifulsoup to extract the data from the HTML.
Update after OP clarifications: the CSS classes are not dynamically generated; they stay the same (that's what I noticed). Since they're the same, you can:
grab a container that wraps all the needed data (a CSS selector for the element that holds everything you need):
for result in soup.select(".pSzOP-AhqUyc-qWD73c.GNzUNc span"):
# ...
use a regex via re.findall() to filter (find) the needed data; the capture group (.*) is the only part that gets captured and returned, and .* means capture everything.
if re.findall(r"^Telephone\s?:\s?(.*)", result.text):
# ...
Have a look at the SelectorGadget Chrome extension to grab CSS selectors by clicking on the desired element in your browser. On that note, there's a dedicated web scraping with CSS selectors blog post of mine.
Code and example in the online IDE:
import requests, re
from bs4 import BeautifulSoup
html = requests.get("https://sites.google.com/a/arden.solihull.sch.uk/futures/home")
soup = BeautifulSoup(html.text, "html.parser")
# all regular expressions for this task
# https://regex101.com/r/cxdxgq/1
for result in soup.select(".pSzOP-AhqUyc-qWD73c.GNzUNc span"):
if re.findall(r"^Careers\s?.*\s?:\s?(.*)", result.text):
name = "".join(re.findall(r"^Careers\s?.*\s?:\s?(.*)", result.text.strip()))
print(name)
if re.findall(r"^Telephone\s?:\s?(.*)", result.text):
telephone = "".join(re.findall(r"^Telephone\s?:\s?(.*)", result.text.strip()))
print(telephone)
if re.findall(r"^Email\s?:\s?(.*)", result.text):
email = "".join(re.findall(r"^Email\s?:\s?(.*)", result.text.strip()))
print(email)
# to scrape the role you can do the same thing with regex. Test on regex101.com
'''
Mrs A. Fallis
01564 773348
afallis@arden.solihull.sch.uk
Mr S. Brady
01564 7733478
sbrady@arden.solihull.sch.uk
'''
First solution, without OP clarifications (shows only the extraction part, since you hadn't provided a website URL):
from bs4 import BeautifulSoup
html = """
<div class="L581yb VICjCf" hjdwnd-ahquyc-r6poud="" jndksc="" l6ctce-pszop"="" l6ctce-purzt="" tabindex=" == $0
<div class=">
</div>
<div class="hJDwNd-AhqUyc-WNfPc purZT-AhqUyC-I15mzb PSzOP-AhqUyc-qWD73c JNdks <div class=" jndksc-smkayb"="">
<div class="" f570id"="" jsaction="zXBUYD: ZTPCnb; 2QF9Uc: Qxe3nd;
jsname=" jscontroller="SGWD4d">
>
<div class="oKdM2C KzvoMe">
<div class="hJDwNd-AhqUyc-WNFPC PSzOP-AhqUyc- qWD73c jXK9ad D2fZ2 Oj CsFc whaque GNzUNC" id="h.7f5e93de0cf8a767_49">
<div class="]XK9ad-SmkAyb">
<div class="ty]Ctd mGzaTb baZpAe">
<div class="GV3q8e aP9Z7e" id="h.p_9livxd801krd">
</div>
<h3 class="CDt4ke zfr3Q OmQG5e" dir="ltr" id="h.p_9livxd801krd" tabindex="-1">
.
</h3>
<div class="GV3q8e aP9z7e" id="h.p JrEgQYpyORCF">
</div>
<h3 class="CDt 4Ke zfr3Q OmQG5e" dir="ltr" id="h.p_JrEgQYPYORCF" tabindex="-1">
<div class="CjVfdc" jsaction="touchstart:UrsOsc; click:Kjs
qPd; focusout:QZoaz; mouseover:yOpDld; mouseout:dq0hvd;fvlRjc:jbFSO
d;CrflRd:SzACGe;" jscontroller="Ae65rd">
<div class="PPHIP rviiZ" jsname="haAclf">
.
</div>
<span style="font-family: 'Oswald'; font-weight: 500;">
Telephone : 01564 773348
</span>
</div>
</h3>
<div class="GV3q8e aP9z7e" id="h.p_sylefz-BOSBX">
</div>
><h3 id="h.p_sylefz-BOSBX" dir="ltr" class="CDt 4Ke zfr3Q OmQG5e"
</div>
</div>
</div>
</div>
</div>
</div>
"""
# pass HTML to BeautifulSoup object and assign a html.parser as a HTML parser
soup = BeautifulSoup(html, "html.parser")
# grab a phone number (only first occurrence will be extracted)
# https://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors
print(soup.select_one('.CjVfdc span').text.strip())
# Telephone : 01564 773348
# extract <div> element with .L581yb class. returns a list()
print(soup.select('.L581yb'))
'''
[<div class="L581yb VICjCf" hjdwnd-ahquyc-r6poud="" jndksc="" l6ctce-pszop"="" l6ctce-purzt="" tabindex=" == $0
<div class=">
</div>]
'''
# extract <div> element with .hJDwNd-AhqUyc-WNfPc class. returns a list()
print(soup.select('.hJDwNd-AhqUyc-WNfPc'))
'''
[<div class="hJDwNd-AhqUyc-WNfPc purZT-AhqUyC-I15mzb PSzOP-AhqUyc-qWD73c JNdks <div class=" jndksc-smkayb"="">
<div class="" f570id"="" jsaction="zXBUYD: ZTPCnb; 2QF9Uc: Qxe3nd;
jsname=" jscontroller="SGWD4d">
>
<div class="oKdM2C KzvoMe">
<div class="hJDwNd-AhqUyc-WNFPC PSzOP-AhqUyc- qWD73c jXK9ad D2fZ2 Oj CsFc whaque GNzUNC" id="h.7f5e93de0cf8a767_49">
<div class="]XK9ad-SmkAyb">
<div class="ty]Ctd mGzaTb baZpAe">
<div class="GV3q8e aP9Z7e" id="h.p_9livxd801krd">
</div>
<h3 class="CDt4ke zfr3Q OmQG5e" dir="ltr" id="h.p_9livxd801krd" tabindex="-1">
.
</h3>
<div class="GV3q8e aP9z7e" id="h.p JrEgQYpyORCF">
</div>
<h3 class="CDt 4Ke zfr3Q OmQG5e" dir="ltr" id="h.p_JrEgQYPYORCF" tabindex="-1">
<div class="CjVfdc" jsaction="touchstart:UrsOsc; click:Kjs
qPd; focusout:QZoaz; mouseover:yOpDld; mouseout:dq0hvd;fvlRjc:jbFSO
d;CrflRd:SzACGe;" jscontroller="Ae65rd">
<div class="PPHIP rviiZ" jsname="haAclf">
.
</div>
<span style="font-family: 'Oswald'; font-weight: 500;">
Telephone : 01564 773348
</span>
</div>
</h3>
<div class="GV3q8e aP9z7e" id="h.p_sylefz-BOSBX">
</div>
><h3 id="h.p_sylefz-BOSBX" dir="ltr" class="CDt 4Ke zfr3Q OmQG5e"
</div>
</div>
</div>
</div>
</div>
</div>]
'''
I have a script that grabs all the images I want on a webpage, and then I need the link that encloses each image.
Right now I click on every image, read the current page URL, then go back and continue. This works but it is slow. Each image is wrapped in an a tag that "hugs" it, and I don't know how to retrieve that tag; with it, this would be easier and faster. I attach the HTML code and my Python code!
HTML code
<div class="col-xl col-lg col-md-4 col-sm-6 col-6">
<a href="URL I WANT TO GET ">
<article>
<span class="year">2017</span>
<span class="quality">4K</span>
<span class="imdb">6.7</span>
<img width="190" height="279" src="THE IMAGE URL" class="img-full wp-post-image" alt="" loading="lazy"> <h2>TITLE</h2>
</article>
</a></div>
<div class="col-xl col-lg col-md-4 col-sm-6 col-6">
<a href="URL I WANT TO GET 2">
<article>
<span class="year">2019</span>
<span class="quality">4K</span>
<span class="imdb">8.0</span>
<img width="190" height="279" src="THE IMAGE URL 2" class="img-full wp-post-image" alt="" loading="lazy"> <h2>TITLE</h2>
</article>
</a></div>
Python code
self.driver.get(category_url)
WebDriverWait(self.driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'div.archivePaging'))) # a div to see if page is loaded
movies_buttons = self.driver.find_elements_by_css_selector('img.img-full.wp-post-image')
print("Getting all the links!")
for movie in movies_buttons:
    self.driver.execute_script("arguments[0].scrollIntoView();", movie)
    movie.click()
    WebDriverWait(self.driver, 10).until(EC.visibility_of_element_located((By.CLASS_NAME, 'infoFilmSingle')))
    print(self.driver.current_url)
    self.driver.back()
    WebDriverWait(self.driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'div.archivePaging')))
Note that this code doesn't work as-is, because I'm referencing a movie element from a previous page; but that isn't really the problem, because if I could just read the link directly I wouldn't need to change pages at all, so the session wouldn't change.
An example based on what I understand you want to do: you want to get the parent a tag's href.
Example
from selenium import webdriver
driver = webdriver.Chrome(executable_path=r'C:\Program Files\ChromeDriver\chromedriver.exe')
html_content = """
<div class="col-xl col-lg col-md-4 col-sm-6 col-6">
<a href="https://www.link1.de">
<article>
<span class="year">2017</span>
<span class="quality">4K</span>
<span class="imdb">6.7</span>
<img width="190" height="279" src="THE IMAGE URL" class="img-full wp-post-image" alt="" loading="lazy"> <h2>TITLE</h2>
</article>
</a>
</div>
<div class="col-xl col-lg col-md-4 col-sm-6 col-6">
<a href="https://www.link2.de">
<article>
<span class="year">2019</span>
<span class="quality">4K</span>
<span class="imdb">8.0</span>
<img width="190" height="279" src="THE IMAGE URL 2" class="img-full wp-post-image" alt="" loading="lazy"> <h2>TITLE</h2>
</article>
</a>
</div>
"""
driver.get("data:text/html;charset=utf-8,{html_content}".format(html_content=html_content))
Locate the image elements by their class and walk up the element structure with .., in this case /../..
driver.get("data:text/html;charset=utf-8,{html_content}".format(html_content=html_content))
aTags = driver.find_elements_by_xpath("//img[contains(@class,'img-full wp-post-image')]/../..")
for ele in aTags:
    x = ele.get_attribute('href')
    print(x)
driver.close()
Output
https://www.link1.de/
https://www.link2.de/
HTML:
<div id="searchResult">
<div class="buySearchResultContent">
<div class="buySearchResultContentImg">
<a href="carinfo-333285.php">
<img src="carpics/9400180056/290x200/20180305101502854_4567823.jpg" srcset="carpics/9400180056/290x200/20180305101502854_9098765.jpg 290w, carpics/9400180056/435x300/20180305101502854_00000.jpg 435w , carpics/9400180056/720x520/20180305101502854_00001.jpg 720w" sizes="(min-width: 992px) 75vw, 90vw" alt="auto">
</a>
</div>
<div class="buySearchResultContentImg">
<a href="carinfo-333286.php">
<img src="carpics/9400180056/290x200/20180305101502854_4567824.jpg" srcset="carpics/9400180056/290x200/20180305101502854_9098766.jpg 290w, carpics/9400180056/435x300/20180305101502854_00000.jpg 436w , carpics/9400180056/720x520/20180305101502854_00001.jpg 721w" sizes="(min-width: 992px) 75vw, 90vw" alt="auto">
</a>
</div>
</div>
</div>
What I am trying to do is extract two hrefs, but with my code, I can only extract the first one.
Code:
driver.find_element_by_css_selector("buySearchResultContentImg > div").get_attribute("href")
Try the code below to get a list of href values:
links = [link.get_attribute("href") for link in driver.find_elements_by_css_selector(".buySearchResultContentImg>a")]
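If you're on Selenium 4+, the find_elements_by_* helpers have been removed; an equivalent, assuming the same page structure, would be:

from selenium.webdriver.common.by import By

links = [link.get_attribute("href") for link in driver.find_elements(By.CSS_SELECTOR, ".buySearchResultContentImg > a")]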
I've been building a web scraper in BS4 and have gotten stuck. I am using Trip Advisor as a test for other data I will be going after, but am not able to isolate the tag of the 'entire' reviews. Here is an example:
https://www.tripadvisor.com/Restaurant_Review-g56010-d470148-Reviews-Chez_Nous-Humble_Texas.html
Notice that in the first review there is an icon below "the wine list is...". I can easily isolate the partial reviews, but I haven't been able to figure out a way to get BS4 to pull the reviews after a simulated 'More' click. I'm trying to figure out what tool(s) are needed for this. Do I need to use Selenium instead?
The original element looks like this:
<span class="partnerRvw">
<span class="taLnk hvrIE6 tr475091998 moreLink ulBlueLinks" onclick=" ta.util.cookie.setPIDCookie(4444); ta.call('ta.servlet.Reviews.expandReviews', {type: 'dummy'}, ta.id('review_475091998'), 'review_475091998', '1', 4444);
">
More </span>
<span class="ui_icon caret-down"></span>
</span>
Looking at the HTML after you click on the More link, you would find a new dynamically added element with the information I need (see below):
<div class="review dyn_full_review inlineReviewUpdate provider0 first newFlag" style="display: block;">
<a name="UR475091998" class=""></a>
<div id="UR475091998" class="extended provider0 first newFlag">
<div class="col1of2">
<div class="member_info">
<div id="UID_6875524F623CC948F4F9CA95BB4A9567-SRC_475091998" class="memberOverlayLink" onmouseover="requireCallIfReady('members/memberOverlay', 'initMemberOverlay', event, this, this.id, 'Reviews', 'user_name_photo');" data-anchorwidth="90">
<div class="avatar profile_6875524F623CC948F4F9CA95BB4A9567 ">
<a onclick="">
<img src="https://media-cdn.tripadvisor.com/media/photo-l/0d/97/43/bf/joannecarpenter.jpg" class="avatar potentialFacebookAvatar avatarGUID:6875524F623CC948F4F9CA95BB4A9567" width="74" height="74">
</a>
</div>
<div class="username mo">
<span class="expand_inline scrname mbrName_6875524F623CC948F4F9CA95BB4A9567" onclick="ta.trackEventOnPage('Reviews', 'show_reviewer_info_window', 'user_name_name_click')">joannecarpenter</span>
</div>
</div>
<div class="location">
Humble, Texas
</div>
</div>
<div class="memberBadging g10n">
<div id="UID_6875524F623CC948F4F9CA95BB4A9567-CONT" class="no_cpu" onclick="ta.util.cookie.setPIDCookie('15984'); requireCallIfReady('members/memberOverlay', 'initMemberOverlay', event, this, this.id, 'Reviews', 'review_count');" data-anchorwidth="90">
<div class="levelBadge badge lvl_02">
Level <span><img src="https://static.tacdn.com/img2/badges/20px/lvl_02.png" alt="" class="icon" width="20" height="20/"></span> Contributor </div>
<div class="reviewerBadge badge">
<img src="https://static.tacdn.com/img2/badges/20px/rev_03.png" alt="" class="icon" width="20" height="20">
<span class="badgeText">6 reviews</span> </div>
<div class="contributionReviewBadge badge">
<img src="https://static.tacdn.com/img2/badges/20px/Foodie.png" alt="" class="icon" width="20" height="20">
<span class="badgeText">6 restaurant reviews</span>
</div>
</div>
</div>
</div>
<div class="col2of2">
<div class="innerBubble">
<div class="quote">“<span class="noQuotes">Dinner</span>”</div>
<div class="rating reviewItemInline">
<span class="rate sprite-rating_s rating_s"> <img class="sprite-rating_s_fill rating_s_fill s50" width="70" src="https://static.tacdn.com/img2/x.gif" alt="5 of 5 bubbles">
</span>
<span class="ratingDate relativeDate" title="April 12, 2017">Reviewed 3 days ago
<span class="new redesigned">NEW</span> </span>
<a class="viaMobile" href="/apps" target="_blank" onclick="ta.util.cookie.setPIDCookie(24687)">
<span class="ui_icon mobile-phone"></span>
via mobile
</a>
</div>
<div class="entry">
<p>
Our favorite restaurant in Houston. Definitely the best and friendliest service! The food is not only served with a flair, it is absolutely delicious. My favorite is the Lamb. It is the best! Also the duck moose, fois gras, the crispy salad and the French onion soup are all spectacular! This is a must try restaurant! The wine list is fantastic. Just ask Daniel for suggestions. He not only knows his wines; he loves what he does! We Love this place!
</p>
</div>
<div class="rating-list">
<div class="recommend">
<span class="recommend-titleInline noRatings">Visited April 2017</span>
</div>
</div>
<div class="expanded lessLink">
<span class="taLnk collapse ulBlueLinks no_cpu ">
Less
</span>
<span class="textArrow_more ui_icon caret-up"></span>
</div>
<div id="helpfulq475091998_expanded" class="helpful redesigned white_btn_container ">
<span class="isHelpful">Helpful?</span> <div class="tgt_helpfulq475091998 rnd_white_thank_btn" onclick="ta.call('ta.servlet.Reviews.helpfulVoteHandlerOb', event, this, 'LeJIVqd4EVIpECri1GII2t6mbqgqguuuxizSxiniaqgeVtIJpEJCIQQoqnQQeVsSVuqHyo3KUKqHMdkKUdvqHxfqHfGVzCQQoqnQQZiptqH5paHcVQQoqnQQrVxEJtxiGIac6XoXmqoTpcdkoKAUAAv0tEn1dkoKAUAAv0zH1o3KUK0pSM13vkooXdqn3XmffAdvqndqnAfbAo77dbAo3k0npEEeJIV1K0EJIVqiJcpV1U0Ii9VC1rZlU3XozxbZZxE2crHN2TDUJiqnkiuzsVEOxdkXqi7TxXpUgyR2xXvOfROwaqILkrzz9MvzCxMva7xEkq8xXNq8ymxbAq8AzzrhhzCxbx2vdNvEn2fnwEfq8alzCeqi53ZrgnMrHhshTtowGpNSmq89IwiVb7crUJxdevaCnJEqI33qiE5JGErJExXKx5ooItGCy5wnCTx2VA7RvxEsO3'); ta.trackEventOnPage('HELPFUL_VOTE_TEST', 'helpfulvotegiven_v2');">
<img src="https://static.tacdn.com/img2/icons/icon_thumb_white.png" class="helpful_thumbs_up white">
<img src="https://static.tacdn.com/img2/icons/icon_thumb_green.png" class="helpful_thumbs_up green">
<span class="helpful_text">Thank joannecarpenter</span> </div>
</div>
<div class="tooltips vertically_centered">
<div class="reportProblem">
<span id="ReportIAP_475091998" class="problem collapsed taLnk" onclick="ta.trackEventOnPage('Report_IAP', 'Report_Button_Clicked', 'member'); ta.call('ta.servlet.Reviews.iapFlyout', event, this, '475091998')" onmouseover="if (!this.getAttribute('data-first')) {ta.trackEventOnPage('Reviews', 'report_problem', 'hover_over_flag'); this.setAttribute('data-first', 1)} uiOverlay(event, this)" data-tooltip="" data-position="above" data-content="Problem with this review?">
<img src="https://static.tacdn.com/img2/icons/gray_flag.png" width="13" height="14" alt="">
<span class="reportTxt">Report</span> </span>
</div>
</div>
<div class="userLinks">
<div class="sameGeoActivity">
<a href="/members-citypage/joannecarpenter/g56010" target="_blank" onclick="ta.setEvtCookie('Reviews','more_reviews_by_user','',0,this.href); ta.util.cookie.setPIDCookie(19160)">
See all 5 reviews by joannecarpenter for Humble </a>
</div>
<div class="askQuestion">
<span class="taLnk ulBlueLinks" onclick="ta.trackEventOnPage('answers_review','ask_user_intercept_click' ); ta.load('ta-answers', (function() {require('answers/misc').askReviewerIntercept(this, '470148', 'joannecarpenter', '6875524F623CC948F4F9CA95BB4A9567', 'en', '475091998','Chez Nous', 39151)}).bind(this), true);">Ask joannecarpenter about Chez Nous</span>
</div>
</div>
<div class="note">
This review is the subjective opinion of a TripAdvisor member and not of TripAdvisor LLC. </div>
<div class="duplicateReviewsInline">
<div class="previous">joannecarpenter has 1 more review of Chez Nous</div> <ul class="dupReviews">
<li class="dupReviewItem">
<div class="reviewTitle">
“Joanne Carpenter”
</div>
<div class="rating">
<span class="rate sprite-rating_ss rating_ss"> <img class="sprite-rating_ss_fill rating_ss_fill ss50" width="50" src="https://static.tacdn.com/img2/x.gif" alt="5 of 5 bubbles">
</span>
<span class="date">Reviewed January 18, 2017</span>
</div>
</li>
</ul>
</div>
</div>
</div>
</div>
<div class="large">
</div>
<div class="ad iab_inlineBanner">
<div id="gpt-ad-468x60" class="adInner gptAd"></div>
</div>
</div>
Is there a way for BS4 to handle this for me?
Here's a simple example to get you started:
import selenium
from selenium import webdriver
driver = webdriver.PhantomJS()
url = "https://www.tripadvisor.com/Restaurant_Review-g56010-d470148-Reviews-Chez_Nous-Humble_Texas.html"
driver.get(url)
elem = driver.find_element_by_class_name("taLnk")
...
You could find more info about the methods here:
http://selenium-python.readthedocs.io/
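BS4 on its own cannot trigger the click, because the full review text is only added to the DOM after the 'More' handler runs. A rough sketch of the combined approach, assuming the class names shown in the question (taLnk/moreLink for the links, dyn_full_review for the expanded blocks), would be to expand every review with Selenium and then hand the rendered page to BeautifulSoup:

from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.PhantomJS()
driver.get("https://www.tripadvisor.com/Restaurant_Review-g56010-d470148-Reviews-Chez_Nous-Humble_Texas.html")

# click every "More" link so the dyn_full_review blocks get inserted
for more in driver.find_elements_by_css_selector("span.taLnk.moreLink"):
    try:
        more.click()
    except Exception:
        pass  # some links may be hidden or already expanded

# parse the fully expanded page with BeautifulSoup
soup = BeautifulSoup(driver.page_source, "html.parser")
for review in soup.select(".dyn_full_review .entry p"):
    print(review.get_text(strip=True))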
In all likelihood you will need to examine a few more of these pages, to identify variations in the HTML code. For the sample you have offered, and given that you are able to obtain it by simulating a press, the following code works to select the paragraph that you seem to want.
from bs4 import BeautifulSoup
HTML = open('temp.htm').read()
soup = BeautifulSoup(HTML, 'lxml')
para = soup.select('.entry > p')
print (para[0].text)
Result:
Our favorite restaurant in Houston. Definitely the best and friendliest service! The food is not only served with a flair, it is absolutely delicious. My favorite is the Lamb. It is the best! Also the duck moose, fois gras, the crispy salad and the French onion soup are all spectacular! This is a must try restaurant! The wine list is fantastic. Just ask Daniel for suggestions. He not only knows his wines; he loves what he does! We Love this place!
Note that there are newlines before and after the paragraph.
This question already has answers here:
retrieve links from web page using python and BeautifulSoup [closed]
(16 answers)
Closed 7 years ago.
I am new to beautiful soup and am trying to figure out how to pull a website from a nested array. The website can be found twice under the "track-visit-website" class.
This is NOT a duplicate of the question asking about how to pull hrefs. I've done that successfully on this page. I am trying to isolate the actual company website.
I've tried several codes, but can't get it to work. Here is an example:
print(item.contents[2].find_all("a", {"class": "track-visit-website"})[0].a)
The site is YP.com Septic Search
Here's the code from the one of the items on the site:
<div class="info">
<h3 class="n">
<div class="info-section info-primary">
<p class="adr" itemprop="address" itemtype="http://schema.org/PostalAddress" itemscope="">
<span class="street-address" itemprop="streetAddress">2806 Farview Dr</span>
<span class="locality" itemprop="addressLocality">Fort Collins, </span>
<span itemprop="addressRegion">CO</span>
<span itemprop="postalCode">80524</span>
</p>
<div class="phones phone primary" itemprop="telephone">(970) 829-0852</div>
</div>
<div class="info-section info-secondary">
<div class="categories">
<div class="links">
<a class="track-visit-website" data-analytics="{"click_id":6,"act":2,"dku":"http://www.affordablesepticanddraincleaning.com","FL":"url","TL":"off","target":"website","LOC":"http://www.affordablesepticanddraincleaning.com"}" target="_blank" rel="nofollow" href="http://www.affordablesepticanddraincleaning.com" data-impressed="1">Website</a>
<a class="track-map-it directions" data-analytics="{"click_id":13,"target":"website","act":4}" href="/listings/1000775636908/directions" data-impressed="1">Directions</a>
<a class="track-more-info" data-analytics="{"click_id":7,"target":"moreInfo","act":1,"FL":"list"}" href="/fort-collins-co/mip/affordable-septic-drain-cleaning-llc-505109997?lid=1000775636908" data-impressed="1">More Info</a>
</div>
Copy this code snippet to a python file and run it
import re
content = """
<div class="info">
<h3 class="n">
<div class="info-section info-primary">
<p class="adr" itemprop="address" itemtype="http://schema.org/PostalAddress" itemscope="">
<span class="street-address" itemprop="streetAddress">2806 Farview Dr</span>
<span class="locality" itemprop="addressLocality">Fort Collins, </span>
<span itemprop="addressRegion">CO</span>
<span itemprop="postalCode">80524</span>
</p>
<div class="phones phone primary" itemprop="telephone">(970) 829-0852</div>
</div>
<div class="info-section info-secondary">
<div class="categories">
<div class="links">
<a class="track-visit-website" data-analytics="{"click_id":6,"act":2,"dku":"http://www.affordablesepticanddraincleaning.com","FL":"url","TL":"off","target":"website","LOC":"http://www.affordablesepticanddraincleaning.com"}" target="_blank" rel="nofollow" href="http://www.affordablesepticanddraincleaning.com" data-impressed="1">Website</a>
<a class="track-map-it directions" data-analytics="{"click_id":13,"target":"website","act":4}" href="/listings/1000775636908/directions" data-impressed="1">Directions</a>
<a class="track-more-info" data-analytics="{"click_id":7,"target":"moreInfo","act":1,"FL":"list"}" href="/fort-collins-co/mip/affordable-septic-drain-cleaning-llc-505109997?lid=1000775636908" data-impressed="1">More Info</a>
</div>
"""
websites = set(re.findall(r'http://[a-zA-Z0-9\.]*\.[a-z]{2,}',content)) # find all urls in the content
websites = list(websites)
print(websites) # or in python2 => print websites
Now find a way to incorporate that into your code: get the HTML, save it as content, run the regex on it, and save the result to a file.
For web scraping you have to know regex.
Read up on regex; a good tutorial is here: regex tutorial
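A minimal sketch of tying it together, assuming the results page can be fetched with plain requests (the search URL below is only a placeholder, not the real one):

import re
import requests

# placeholder URL - substitute the actual YP search results page you are scraping
url = "https://www.yellowpages.com/search?search_terms=septic"
content = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}).text

# same pattern as above; note it only matches http:// links
websites = list(set(re.findall(r'http://[a-zA-Z0-9\.]*\.[a-z]{2,}', content)))

with open("websites.txt", "w") as f:
    f.write("\n".join(websites))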