I'm using Scrapy and I'm trying to scrape something like this:
<html>
<div class='hello'>
some elements
.
.
.
</div>
<div class='hi there'>
<div>
<h3> title </h3>
<h4> another title </h4>
<p> some text ..... </p>
"some text without any tag"
<div class='article'>
some elements
.
.
</div>
<div class='article'>
some elements
.
.
</div>
<div class='article'>
some elements
.
.
</div>
</div>
</div>
</html>
If I want to extract the text from all elements under the div with class 'hi there' that come before the divs with class 'article', is there any possible way, either with XPath or CSS selectors?
Never used Scrapy and have no idea what functions it has, but
//div[@class='hi there']/div/div[@class='article'][1]/preceding-sibling::*
picks out the elements before the first div with the "article" class, and
//div[@class='hi there']/div/div[@class='article'][1]/preceding-sibling::text()
gives you the text nodes before it.
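As a quick check, here is a sketch with lxml and a trimmed version of the markup above (Scrapy's response.xpath() accepts the same expression):

```python
from lxml import html as htm

doc = htm.fromstring('''
<div class='hi there'>
  <div>
    <h3>title</h3>
    <p>some text</p>
    some text without any tag
    <div class='article'>article 1</div>
    <div class='article'>article 2</div>
  </div>
</div>''')

# elements that precede the first div.article
before = doc.xpath(
    "//div[@class='hi there']/div/div[@class='article'][1]/preceding-sibling::*")
print([e.tag for e in before])  # ['h3', 'p']
```

Swapping `preceding-sibling::*` for `preceding-sibling::text()` returns the bare text nodes (including tails like "some text without any tag") instead of elements.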
Related
I have a soup of this format:
<div class = 'foo'>
<table> </table>
<p> </p>
<p> </p>
<p> </p>
<div class = 'bar'>
<p> </p>
.
.
</div>
I want to scrape all the paragraphs between the table and the bar div. The challenge is that the number of paragraphs between them is not constant, so I can't just take the first three (it could be anywhere from 1 to 5).
How do I divide this soup to get the paragraphs? Regex seemed decent at first, but it didn't work for me, as I would still need a soup object later to allow for further extraction.
Thanks a ton
You could select your element, iterate over its siblings and break as soon as a sibling is not a p:
for t in soup.div.table.find_next_siblings():
    if t.name != 'p':
        break
    print(t)
or the other way around and closer to your initial question - select the <div class = 'bar'> and call find_previous_siblings('p'):
for t in soup.select_one('.bar').find_previous_siblings('p'):
    print(t)
Example
from bs4 import BeautifulSoup
html='''
<div class = 'foo'>
<table> </table>
<p> </p>
<p> </p>
<p> </p>
<div class = 'bar'>
<p> </p>
.
.
</div>
'''
soup = BeautifulSoup(html, 'html.parser')
for t in soup.div.table.find_next_siblings():
    if t.name != 'p':
        break
    print(t)
Output
<p> </p>
<p> </p>
<p> </p>
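The second variant can be sketched the same way; note that find_previous_siblings() returns matches in reverse document order (a self-contained example with made-up paragraph text):

```python
from bs4 import BeautifulSoup

html = '''<div class='foo'>
<table> </table>
<p>one</p>
<p>two</p>
<div class='bar'><p>inner</p></div>
</div>'''

soup = BeautifulSoup(html, 'html.parser')
# previous siblings come back in reverse document order;
# the table is filtered out because only 'p' tags are requested
paras = soup.select_one('.bar').find_previous_siblings('p')
print([p.text for p in paras])  # ['two', 'one']
```

Reverse the list (or use reversed()) if document order matters for further extraction.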
If the html is as shown, then just use :not to filter out the later sibling p tags:
from bs4 import BeautifulSoup
html='''
<div class = 'foo'>
<table> </table>
<p> </p>
<p> </p>
<p> </p>
<div class = 'bar'>
<p> </p>
.
.
</div>
'''
soup = BeautifulSoup(html, 'html.parser')
soup.select('.foo > table ~ p:not(.bar ~ p)')
I would like to get the movie names available between the "tracked_by" id and the "buzz_off" id. I have already created a selector which can grab names after the "tracked_by" id. However, my intention is to let the script parse only UNTIL it finds the "buzz_off" id. The elements containing the names are:
html = '''
<div class="list">
<a id="allow" name="allow"></a>
<h4 class="cluster">Allow</h4>
<div class="base min">Sally</div>
<div class="base max">Blood Diamond</div>
<a id="tracked_by" name="tracked_by"></a>
<h4 class="cluster">Tracked by</h4>
<div class="base min">Gladiator</div>
<div class="base max">Troy</div>
<a id="buzz_off" name="buzz_off"></a>
<h4 class="cluster">Buzz-off</h4>
<div class="base min">Heat</div>
<div class="base max">Matrix</div>
</div>
'''
from lxml import html as htm
root = htm.fromstring(html)
for item in root.cssselect("a#tracked_by ~ div.base"):
    print(item.text)
The selector I've tried with (also mentioned in the above script):
a#tracked_by ~ div.base
Results I'm having:
Gladiator
Troy
Heat
Matrix
Results I would like to get:
Gladiator
Troy
Btw, I would like to parse the names with this selector, not to style anything.
This is a reference for CSS selectors. As you can see, CSS has no form of logic, as it is not a programming language. You'd have to loop in Python, handle each element one at a time, and break when you reach the stop marker, appending the matches to a list.
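That loop-and-break approach might look like this (a sketch with lxml, using a trimmed version of the question's markup):

```python
from lxml import html as htm

html = '''
<div class="list">
<a id="tracked_by" name="tracked_by"></a>
<h4 class="cluster">Tracked by</h4>
<div class="base min">Gladiator</div>
<div class="base max">Troy</div>
<a id="buzz_off" name="buzz_off"></a>
<h4 class="cluster">Buzz-off</h4>
<div class="base min">Heat</div>
<div class="base max">Matrix</div>
</div>
'''

root = htm.fromstring(html)
names = []
# walk the siblings that follow the tracked_by anchor, stop at buzz_off
for el in root.xpath('//a[@id="tracked_by"]')[0].itersiblings():
    if el.get('id') == 'buzz_off':
        break
    if el.tag == 'div' and 'base' in (el.get('class') or '').split():
        names.append(el.text)
print(names)  # ['Gladiator', 'Troy']
```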
I need to get div text with class _50x4 using 5pxsel:
<div...>
<i class="5pxsel">
<div>
<div>
<div class="_50x4">
Work in
<a>London</a>
<div class="_50x4">
Work in
<a> Germany </a>
I need to get the text using the class 5pxsel, not _50x4, and to get only the first result: 'Work in London'.
Try the following XPath:
//*[@class='5pxsel']/following-sibling::div/div/div[@class='_50x4']
To take only the first match, wrap the whole expression in parentheses and index it:
(//*[@class='5pxsel']/following-sibling::div/div/div[@class='_50x4'])[1]
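Assuming the divs are siblings that follow the <i> tag (the snippet's nesting is ambiguous), a quick check with lxml:

```python
from lxml import html as htm

html = '''<div>
<i class="5pxsel"></i>
<div><div>
<div class="_50x4">Work in <a>London</a></div>
<div class="_50x4">Work in <a>Germany</a></div>
</div></div>
</div>'''

doc = htm.fromstring(html)
# parentheses + [1] take the first match of the whole expression
first = doc.xpath(
    '(//*[@class="5pxsel"]/following-sibling::div/div/div[@class="_50x4"])[1]')
print(first[0].text_content())  # Work in London
```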
My intention is to scrape the names of the top-selling products on Ali-Express.
I'm using the Requests library alongside Beautiful Soup to accomplish this.
# Remember to import BeautifulSoup, requests and pprint
url = "https://bestselling.aliexpress.com/en?spm=2114.11010108.21.3.qyEJ5m"
soup = bs(req.get(url).text, 'html.parser')
#pp.pprint(soup) Verify that the page has been found
all_items = soup.find_all('li',class_= 'top10-item')
pp.pprint(all_items)
# []
However this returns an empty list, indicating that soup.find_all() did not find any tags fitting that criterion.
Inspect Element in Chrome displays the populated list items (screenshot omitted).
However, in the page source the ul with class "top10-items" contains only a script, which seems to be a template that is iterated for each list item (I'm not familiar with HTML):
<div class="container">
<div class="top10-header"><span class="title">TOP SELLING</span> <span class="sub-title">This week's most popular products</span></div>
<ul class="top10-items loading" id="bestselling-top10">
</ul>
<script class="X-template-top10" type="text/mustache-template">
{{#topList}}
<li class="top10-item">
<div class="rank-orders">
<span class="rank">{{rank}}</span>
<span class="orders">{{productOrderNum}}</span>
</div>
<div class="img-wrap">
<a href="{{productDetailUrl}}" target="_blank">
<img src="{{productImgUrl}}" alt="{{productName}}">
</a>
</div>
<a class="item-desc" href="{{productDetailUrl}}" target="_blank">{{productName}}</a>
<p class="item-price">
<span class="price">US ${{productMinPrice}}</span>
<span class="uint">/ {{productUnitType}}</span>
</p>
</li>
{{/topList}}</script>
</div>
</div>
So this probably explains why soup.find_all() doesn't find the "li" tag.
My question is: How can I extract the item names from the script using Beautiful soup?
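For what it's worth, Beautiful Soup can locate the template itself, but the template contains only {{...}} placeholders, not product names; the data is filled in client-side, so getting the actual names would require the underlying data request or a browser-driven tool. A sketch of grabbing the template text, using a cut-down copy of the markup above:

```python
from bs4 import BeautifulSoup

html = '''<ul class="top10-items loading" id="bestselling-top10"></ul>
<script class="X-template-top10" type="text/mustache-template">
{{#topList}}<li class="top10-item">{{productName}}</li>{{/topList}}
</script>'''

soup = BeautifulSoup(html, 'html.parser')
template = soup.find('script', class_='X-template-top10')
print(template.string)  # only {{placeholders}}, no actual names
```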
HTML of page:
<form name="compareprd" action="">
<div class="gridBox product " id="quickLookItem-1">
<div class="gridItemTop">
</div>
</div>
<div class="gridBox product " id="quickLookItem-2">
<div class="gridItemTop">
</div>
</div>
<!-- many more like this. -->
I am using Beautiful Soup to scrape a page. On that page I am able to get a form tag by its name.
tag = soup.find("form", {"name": "compareprd"})
Now I want to count all immediate child divs but not all nested divs.
Say for example there are 20 immediate divs inside form.
I tried :
len(tag.findChildren("div"))
But It gives 1500.
I think it gives all "div" inside "form" tag.
Any help appreciated.
You can use a single CSS selector, form[name=compareprd] > div, which will find the divs that are immediate children of the form:
html = """<form name="compareprd" action="">
<div class="gridBox product " id="quickLookItem-1">
<div class="gridItemTop">
</div>
</div>
<div class="gridBox product " id="quickLookItem-2">
<div class="gridItemTop">
</div>
</div>
</form>"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
print(len(soup.select("form[name=compareprd] > div")))
Or, as commented, pass recursive=False, but use find_all; findChildren goes back to the bs2 days and is only provided for backwards compatibility.
len(tag.find_all("div", recursive=False))
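Both approaches agree on the question's sample markup; a self-contained check:

```python
from bs4 import BeautifulSoup

html = """<form name="compareprd" action="">
<div class="gridBox product" id="quickLookItem-1">
<div class="gridItemTop"></div>
</div>
<div class="gridBox product" id="quickLookItem-2">
<div class="gridItemTop"></div>
</div>
</form>"""

soup = BeautifulSoup(html, "html.parser")
tag = soup.find("form", {"name": "compareprd"})
# recursive=False stops find_all from descending into nested divs
print(len(tag.find_all("div", recursive=False)))       # 2
# the child combinator > does the same in a CSS selector
print(len(soup.select("form[name=compareprd] > div")))  # 2
```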