I am messing around with some web scraping using Splinter but have this issue. The HTML basically has loads of li elements, only some of which I am interested in. The ones I am interested in have a bid value. Now, I know that for Beautiful Soup I can do
tab = browser.find_by_css('li', {'bid': '18663145091'})
but this doesn't seem to work for splinter. I get an error saying:
find_by_css() takes exactly 2 arguments (3 given)
This is a sample of my html:
<li class="rugby" bid="18663145091">
<span class="info">
<div class="points">
12
</div>
<img alt="Leinster" height="19" src="..Leinster" width="26"/>
</span>
</li>
It looks like you are using the find_by_css() method as if it were a BeautifulSoup method. Instead, provide a valid CSS selector that checks the value of the bid attribute (the value needs quotes, since it starts with a digit):
tab = browser.find_by_css('li[bid="18663145091"]')
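For context, a minimal sketch of how that call fits into a full splinter session (the browser choice and URL are assumptions, since the question does not provide them):
from splinter import Browser
browser = Browser('firefox')  # assumes geckodriver is available; any supported driver works
browser.visit('https://example.com')  # placeholder URL
tabs = browser.find_by_css('li[bid="18663145091"]')  # returns an ElementList
if not tabs.is_empty():
    print(tabs.first.text)
browser.quit()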
I was learning web scraping, and the 'li' tag is not showing when I run soup.findAll
Here's the html:
<label>
<input type="checkbox">
<ul class="dropdown-content">
<li>
<a href=stuff</a>
</li>
</ul>
</label>
I tried:
soup = BeautifulSoup(r.content,'html5lib')
dropdown = soup.findAll('ul', {'class':'dropdown-content'})
print(dropdown)
And it only shows:
[<ul class="dropdown-content"></ul>]
Any help will do. Thanks!
In this command: dropdown = soup.findAll('ul', {'class':'dropdown-content'}), you search for ul elements with the dropdown-content class. To get the li elements instead, do:
dropdown = soup.find('ul').findAll('li')
Your selection per se is okay to find the <ul>, but it may not contain any <li> because I assume these elements are generated dynamically by JavaScript. To validate this, the question should be improved and the URL of the website provided.
If the content is provided dynamically, one approach could be to work with selenium, which renders the website like a browser and can return the "full" DOM.
Note: In new code use find_all() instead of the old syntax findAll().
Example
The HTML in your example is broken, but your code works if there are any lis in the ul in your soup.
import requests
from bs4 import BeautifulSoup
html = '''
<label>
<input type="checkbox">
<ul class="dropdown-content">
<li>
</li>
</ul>
</label>
'''
soup = BeautifulSoup(html,'html5lib')
dropdown = soup.find_all('ul', {'class':'dropdown-content'})
print(dropdown)
Output
[<ul class="dropdown-content">
<li>
</li>
</ul>]
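If the <li> elements are indeed generated by JavaScript, a minimal sketch of the selenium route mentioned above could look like this (browser choice and URL are assumptions, since the question does not provide them):
import time
from selenium import webdriver
from bs4 import BeautifulSoup
browser = webdriver.Firefox()
browser.get('https://example.com')  # placeholder — the question does not provide the URL
time.sleep(1)  # crude wait for the JavaScript to render the list items
soup = BeautifulSoup(browser.page_source, 'html5lib')
browser.quit()
dropdown = soup.find('ul', {'class': 'dropdown-content'})
print(dropdown.find_all('li'))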
I'm a beginner in programming altogether and am working on a project of mine. For that I'm trying to parse data from a website to make a tool that uses the data. I found that BeautifulSoup and Requests are common tools to do it, but unfortunately I cannot seem to make it work. It always returns the value None or an error where it says:
"TypeError: 'NoneType' object is not callable"
Did I do anything wrong? Is it maybe not possible to parse some websites' data and I'm being restricted access or something?
If there are other ways to access the data, I'm happy to hear them as well.
Here is my code:
from bs4 import BeautifulSoup
import requests
pickrates = {} # dict to store winrate of champions for each position
source = requests.get("http://u.gg/lol/champions/aatrox/build?role=top").text
soup = BeautifulSoup(source, "lxml")
value = soup.find("div", class_="content-section champion-ranking-stats")
print(value.prettify())
Remember that when you request a webpage with the requests module, you only get the HTML of that page; this module is not capable of rendering JavaScript.
Try this code:
import requests
source = requests.get("http://u.gg/lol/champions/aatrox/build?role=top").text
print(source)
Then search by hand (ctrl + f) for the class names you provided: there are no such elements at all. It means those are generated by other requests, like ajax. They are created after the initial html page is loaded, so before Beautiful Soup comes to the party, you can't get them even in the .text attribute of the response object.
One way of doing it is to use Selenium or any other library which handles the JS.
As in this question (can't find html tag when I scrape web using beautifulsoup), the problem is likely caused by content rendered by JavaScript. I would suggest you use selenium to handle this issue. So, let's apply selenium to send the request and get back the page source, and then use BeautifulSoup to parse it.
Don't forget to download a browser driver from https://www.selenium.dev/documentation/getting_started/installing_browser_drivers/ and place it in the same directory as your code.
The example code below uses selenium with Firefox:
import time
from selenium import webdriver
from bs4 import BeautifulSoup
URL = 'http://u.gg/lol/champions/aatrox/build?role=top'
browser = webdriver.Firefox()
browser.get(URL)
time.sleep(1)  # give the JavaScript a moment to render the stats
soup = BeautifulSoup(browser.page_source, 'html.parser')
browser.close()
value = soup.find("div", class_="content-section champion-ranking-stats")
print(value.prettify())
Your expected output would be like:
>>> print(value.prettify())
<div class="content-section champion-ranking-stats">
<div class="win-rate meh-tier">
<div class="value">
48.4%
</div>
<div class="label">
Win Rate
</div>
</div>
<div class="overall-rank">
<div class="value">
49 / 58
</div>
<div class="label">
Rank
</div>
</div>
<div class="pick-rate">
<div class="value">
3.6%
</div>
<div class="label">
Pick Rate
</div>
</div>
<div class="ban-rate">
<div class="value">
2.3%
</div>
<div class="label">
Ban Rate
</div>
</div>
<div class="matches">
<div class="value">
55,432
</div>
<div class="label">
Matches
</div>
</div>
</div>
I am trying to scrape an Instagram page and want to get/access the div tags present inside of a span tag, but I can't! The HTML of the Instagram page looks like this:
<head>--</head>
<body>
<span id="react-root" aria-hidden="false">
<form enctype="multipart/form-data" method="POST" role="presentation">…</form>
<section class="_9eogI E3X2T">
<main class="SCxLW o64aR" role="main">
<div class="v9tJq VfzDr">
<header class=" HVbuG">…</header>
<div class="_4bSq7">…</div>
<div class="fx7hk">…</div>
</div>
</main>
</section>
</body>
I do it as:
from bs4 import BeautifulSoup
import urllib.request as urllib2
html_page = urllib2.urlopen("https://www.instagram.com/cherrified_/?hl=en")
soup = BeautifulSoup(html_page,"lxml")
span_tag = soup.find('span') # return span-tag correctly
span_tags.find_all('divs') # return empty list, why ?
Please also specify an example.
Instagram is a Single Page Application powered by React, which means its source is just a simple "empty" page that loads JavaScript to dynamically generate the content in the browser after downloading.
Click "View source" or go to view-source:https://www.instagram.com/cherrified_/?hl=en in Chrome. This is the HTML you download with urllib.request.
You can see that there is a single <span> tag, which does not include a <div> tag. (Note: <div> inside a <span> is not allowed).
Scraping instagram.com this way is not possible. It also might not be legal (I am not a lawyer).
Notes:
your HTML code example doesn't include a closing tag for <span>.
your HTML code example doesn't match the link you provide in the python snippet.
in the last line of the python snippet you probably meant span_tag.find_all('div') (note the variable name and the singular 'div').
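With those fixes applied, re-running the fetch from the question should still print an empty list, confirming that the downloaded HTML simply contains no <div> inside the <span>:
import urllib.request as urllib2
from bs4 import BeautifulSoup
html_page = urllib2.urlopen("https://www.instagram.com/cherrified_/?hl=en")
soup = BeautifulSoup(html_page, "lxml")
span_tag = soup.find('span')
print(span_tag.find_all('div'))  # expected: [] — the raw page has no divs inside the span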
I am trying to use Python Selenium Firefox Webdriver to grab the h2 content 'My Data Title' from this HTML
<div class="box">
<ul class="navigation">
<li class="live">
<span>
Section Details
</span>
</li>
</ul>
</div>
<div class="box">
<h2>
My Data Title
</h2>
</div>
<div class="box">
<ul class="navigation">
<li class="live">
<span>
Another Section
</span>
</li>
</ul>
</div>
<div class="box">
<h2>
Another Title
</h2>
</div>
Each div has a class of box so I can't easily identify the one I want. Is there a way to tell Selenium to grab the h2 in the box class that comes after the one that has the span called 'Section Details'?
If you want to grab the h2 in the box class that comes after the one that has the span with text Section Details, try the below XPath using the preceding axis:
(//h2[preceding::span[normalize-space(text()) = 'Section Details']])[1]
or using the following axis:
(//span[normalize-space(text()) = 'Section Details']/following::h2)[1]
and for Another Section just change the span text in the XPath:
(//h2[preceding::span[normalize-space(text()) = 'Another Section']])[1]
or
(//span[normalize-space(text()) = 'Another Section']/following::h2)[1]
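In Python selenium, using the first variant could look like this (a minimal sketch; the driver setup and page URL are assumptions, since the question does not provide them):
from selenium import webdriver
driver = webdriver.Firefox()
driver.get('https://example.com')  # placeholder — the page containing the boxes
title = driver.find_element_by_xpath("(//span[normalize-space(text()) = 'Section Details']/following::h2)[1]")
print(title.text)  # expected: My Data Title
driver.quit()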
Here is an XPath to select the title following the text "Section Details":
//div[@class='box'][normalize-space(.)='Section Details']/following::h2
yeah, you need to do some complicated xpath searching:
referenceElementList = driver.find_elements_by_xpath("//span")
for eachElement in referenceElementList:
    if eachElement.get_attribute("innerHTML") == 'Section Details':
        elementYouWant = eachElement.find_element_by_xpath("../../../following-sibling::div/h2")
elementYouWant.get_attribute("innerHTML") should give you "My Data Title".
My code reads:
find all span elements regardless of where they are in the HTML and store them in a list called referenceElementList;
iterate over the span elements in referenceElementList one by one, looking for a span whose innerHTML attribute is 'Section Details';
if there is a match, we have found the span, and we navigate backwards three levels to locate the enclosing div[@class='box'], then find this div element's following sibling, which is the second div element;
lastly, we locate the h2 element inside that sibling div.
Can you please tell me if my code works? I might have gone wrong somewhere navigating backwards.
There is a potential difficulty you may encounter: the innerHTML attribute may contain tab, newline and space characters; in that case, you need a regex to do some filtering first.
This is the link where I am trying to fetch data: flipkart
and the relevant part of the code:
<div class="toolbar-wrap line section">
<div class="ratings-reviews-wrap">
<div itemprop="aggregateRating" itemscope="" itemtype="http://schema.org/AggregateRating" class="ratings-reviews line omniture-field">
<div class="ratings">
<meta itemprop="ratingValue" content="1">
<div class="fk-stars" title="1 stars">
<span class="unfilled">★★★★★</span>
<span class="rating filled" style="width:20%">
★★★★★
</span>
</div>
<div class="count">
<span itemprop="ratingCount">2</span>
</div>
</div>
</div>
</div>
</div>
Here I have to fetch 1 stars from title="1 stars" and 2 from <span itemprop="ratingCount">2</span>.
I tried the following code:
x = link_soup.find_all("div",class_='fk-stars')[0].get('title')
print x, " product_star"
y = link_soup.find_all("span",itemprop="ratingCount")[0].string.strip()
print y
but it gives:
IndexError: list index out of range
The content that you see in the browser is not actually present in the raw HTML that is retrieved from this URL.
When loaded in a browser, the page executes AJAX calls to load additional content, which is then dynamically inserted into the page. One of those calls gets the ratings info that you are after; specifically, this URL is the one that contains the HTML that is inserted as the "action bar".
But if you retrieve the main page using Python, e.g. with requests, urllib et al., the dynamic content is not loaded, and that is why BeautifulSoup can't find the tags.
You could analyse the main page to find the actual link, retrieve that, and then run it through BeautifulSoup. The link looks like it begins with /p/pv1/spotList1/spot1/actionBar, so that (or perhaps just actionBar) is sufficient to locate the actual link.
Or you could use selenium to load the page and then grab and process the rendered HTML.
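For the selenium route, a minimal sketch could look like the following (Firefox is an assumption, and the product URL is the one linked in the question, elided here):
from selenium import webdriver
from bs4 import BeautifulSoup
import time
product_url = '...'  # the Flipkart product page linked in the question (URL elided here)
driver = webdriver.Firefox()
driver.get(product_url)
time.sleep(2)  # crude wait for the AJAX-loaded action bar to render
link_soup = BeautifulSoup(driver.page_source, 'lxml')
driver.quit()
x = link_soup.find_all("div", class_='fk-stars')[0].get('title')
y = link_soup.find_all("span", itemprop="ratingCount")[0].string.strip()
print(x, y)  # expected something like: 1 stars 2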