I'm using Scrapy to parse an HTML page. My question is: why does Scrapy sometimes return the response I want, but sometimes return no response at all? Is it my fault? Here's my parsing function:
class AmazonSpider(BaseSpider):
    name = "amazon"
    allowed_domains = ["amazon.org"]
    start_urls = [
        "http://www.amazon.com/s?rh=n%3A283155%2Cp_n_feature_browse-bin%3A2656020011"
    ]

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//div[contains(@class, "result")]')
        items = []
        titles = {'titles': sites[0].xpath('//a[@class="title"]/text()').extract()}
        for title in titles['titles']:
            item = AmazonScrapyItem()
            item['title'] = title
            items.append(item)
        return items
I believe you are just not using the most adequate XPath expression.
Amazon's HTML is kind of messy and not very uniform, and therefore not easy to parse. But after some experimenting I could extract all 12 titles from a couple of search result pages with the following parse function:
def parse(self, response):
    sel = Selector(response)
    p = sel.xpath('//div[@class="data"]/h3/a')
    titles = p.xpath('span/text()').extract() + p.xpath('text()').extract()
    items = []
    for title in titles:
        item = AmazonScrapyItem()
        item['title'] = title
        items.append(item)
    return items
If you care about the actual order of the results the above code might not be appropriate but I believe that is not the case.
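If order does matter, a minimal sketch that preserves it, assuming the same div.data/h3/a structure as above: iterate the link nodes one by one and take each node's span text if present, falling back to its direct text.

def parse(self, response):
    # Walk each link node individually so titles stay in page order.
    for a in response.xpath('//div[@class="data"]/h3/a'):
        # Some titles are wrapped in a <span>, others are bare text nodes.
        title = a.xpath('span/text()').extract_first() or a.xpath('text()').extract_first()
        if title:
            item = AmazonScrapyItem()
            item['title'] = title
            yield item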
import scrapy

class rlgSpider(scrapy.Spider):
    name = 'bot'
    start_urls = [
        'https://rocket-league.com/trading?filterItem=0&filterCertification=0&filterPaint=0&filterPlatform=1&filterSearchType=1&filterItemType=0&p=1']

    def parse(self, response):
        data = {}
        offers = response.xpath('//div[@class = "col-3-3"]')
        for offer in offers:
            for item in offer.xpath('//div[@class = "rlg-trade-display-container is--user"]/div[@class = "rlg-trade-display-items"]/div[@class = "col-1-2 rlg-trade-display-items-container"]/a'):
                data['name'] = item.xpath('//div/div[@position="relative"]/h2').extract()
                yield data
Here is what I did so far. It doesn't work well: it scrapes the URL and not the h2 tag. How do I get to the h2 when it's nested inside so many divs?
In order to search within an element in Scrapy you need to start your XPath with ".", otherwise you will be searching the whole response. This is the correct way of doing it:
def parse(self, response):
    offers = response.xpath('//div[@class = "col-3-3"]')
    for offer in offers:
        for item in offer.xpath('.//div[@class = "rlg-trade-display-container is--user"]/div[@class = "rlg-trade-display-items"]/div[@class = "col-1-2 rlg-trade-display-items-container"]/a'):
            data = {}
            data['name'] = item.xpath('.//h2/text()').extract_first()
            yield data
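To see why the leading dot matters, here is a small illustration (the class name "offer" is made up): inside a loop, an absolute XPath searches the whole document on every iteration, while a dotted one stays inside the current element.

for offer in response.xpath('//div[@class="offer"]'):  # hypothetical class name
    # Absolute: matches every <h2> in the entire page, for every offer.
    all_h2 = offer.xpath('//h2/text()').extract()
    # Relative: matches only the <h2> elements inside this particular offer.
    own_h2 = offer.xpath('.//h2/text()').extract()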
I'm trying to scrape this website https://phdessay.com/free-essays/.
I need to find the maximum number of pages so that I can append the URLs with page numbers to the start_urls list. I'm not able to figure out how to do that.
Here's my code so far,
class PhdessaysSpider(scrapy.Spider):
    name = 'phdessays'
    start_urls = ['https://phdessay.com/free-essays/']

    def parse(self, response):
        all_essay_urls = response.css('.phdessay-card-read::attr(href)').getall()
        for essay_url in all_essay_urls:
            yield scrapy.Request(essay_url, callback=self.parse_essay_contents)

    def parse_essay_contents(self, response):
        items = PhdEssaysItem()
        essay_title = response.css('.site-title::text').get()
        essay_url = response.request.url
        items['essay_title'] = essay_title
        items['essay_url'] = essay_url
        yield items
In the above code, I'm following each essay to its individual page and scraping the URL and the title (I will also be scraping the content, which is why I'm following the individual essay URL).
This works just fine for the starting page, but there are about 1677 pages, and that number might change in the future. I would like to scrape this maximum_no_of_pages number and then append the links for all page numbers.
What you could do is find the last page number and then do a range loop to yield the next-page requests.
Something like this:
class PhdessaysSpider(scrapy.Spider):
    name = 'phdessays'
    start_urls = ['https://phdessay.com/free-essays/']

    def parse(self, response):
        max_page = int(response.css('.page-numbers::text').getall()[-1])
        for page_number in range(1, max_page + 1):
            page_url = f'https://phdessay.com/free-essays/page/{page_number}/'
            yield scrapy.Request(page_url, callback=self.parse_page)

    def parse_page(self, response):
        all_essay_urls = response.css('.phdessay-card-read::attr(href)').getall()
        for essay_url in all_essay_urls:
            yield scrapy.Request(essay_url, callback=self.parse_essay_contents)

    def parse_essay_contents(self, response):
        items = PhdEssaysItem()
        essay_title = response.css('.site-title::text').get()
        essay_url = response.request.url
        items['essay_title'] = essay_title
        items['essay_url'] = essay_url
        yield items
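As an alternative to computing the page count up front, you could follow the pagination links as you crawl; a sketch, assuming the next-page anchor carries a "next" class (unverified for this site):

def parse(self, response):
    for essay_url in response.css('.phdessay-card-read::attr(href)').getall():
        yield scrapy.Request(essay_url, callback=self.parse_essay_contents)
    # Follow the "next" pagination link, if any; response.follow resolves
    # relative URLs against the current page automatically.
    next_page = response.css('a.next::attr(href)').get()
    if next_page:
        yield response.follow(next_page, callback=self.parse)

This way the spider keeps working even when the total page count changes.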
I want to extract the title and the pdf link of each paper in this link: https://iclr.cc/Conferences/2019/Schedule?type=Poster
Here is my code:
class ICLRCrawler(Spider):
    name = "ICLRCrawler"
    allowed_domains = ["iclr.cc"]
    start_urls = ["https://iclr.cc/Conferences/2019/Schedule?type=Poster", ]

    def parse(self, response):
        papers = Selector(response).xpath('//*[@id="content"]/div/div[@class="paper"]')
        titles = Selector(response).xpath('//*[@id="maincard_704"]/div[3]')
        links = Selector(response).xpath('//*[@id="maincard_704"]/div[6]/a[2]')
        for title, link in zip(titles, links):
            item = PapercrawlerItem()
            item['title'] = title.xpath('text()').extract()[0]
            item['pdf'] = link.xpath('/@href').extract()[0]
            item['sup'] = ''
            yield item
However, it seems it's not easy to get the title and link of each paper this way. How can I change the code to get the data?
You can use a much simpler approach:
def parse(self, response):
    for poster in response.xpath('//div[starts-with(@id, "maincard_")]'):
        item = PapercrawlerItem()
        item["title"] = poster.xpath('.//div[@class="maincardBody"]/text()[1]').get()
        item["pdf"] = poster.xpath('.//a[@title="PDF"]/@href').get()
        yield item
You have to replace extract()[0] with get_attribute('href').
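Note that get_attribute belongs to Selenium's WebElement API, not to Scrapy selectors. If you were driving the page with Selenium instead, the equivalent lookup might look like this sketch (the selectors are taken from the answer above and assume every card has both a title and a PDF link):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://iclr.cc/Conferences/2019/Schedule?type=Poster")
for poster in driver.find_elements(By.CSS_SELECTOR, 'div[id^="maincard_"]'):
    title = poster.find_element(By.CSS_SELECTOR, 'div.maincardBody').text
    # get_attribute('href') is how Selenium reads an attribute value
    pdf = poster.find_element(By.CSS_SELECTOR, 'a[title="PDF"]').get_attribute('href')
    print(title, pdf)
driver.quit()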
I am scraping a news website with the Scrapy framework, but it seems to store only the last item scraped, repeated for every loop iteration.
I want to store the Title, Date, and Link, which I scrape from the first page,
and also store the whole news article. So I want to merge the article, which is stored as a list, into a single string.
Item code
import scrapy

class ScrapedItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    source = scrapy.Field()
    date = scrapy.Field()
    paragraph = scrapy.Field()
Spider code
import scrapy
from ..items import ScrapedItem

class CBNCSpider(scrapy.Spider):
    name = 'kontan'
    start_urls = [
        'https://investasi.kontan.co.id/rubrik/28/Emiten'
    ]

    def parse(self, response):
        box_text = response.xpath("//ul/li/div[@class='ket']")
        items = ScrapedItem()
        for crawl in box_text:
            title = crawl.css("h1 a::text").extract()
            source = "https://investasi.kontan.co.id" + (crawl.css("h1 a::attr(href)").extract()[0])
            date = crawl.css("span.font-gray::text").extract()[0].replace("|", "")
            items['title'] = title
            items['source'] = source
            items['date'] = date
            yield scrapy.Request(url=source,
                                 callback=self.parseparagraph,
                                 meta={'item': items})

    def parseparagraph(self, response):
        items_old = response.meta['item']  # only the last item gets stored
        paragraph = response.xpath("//p/text()").extract()
        items_old['paragraph'] = paragraph  # should be merged into a single string
        yield items_old
I expect the Date, Title, and Source to be updated through the loop,
and the article to be merged into a single string so it can be stored in MySQL.
I defined an empty dictionary inside the loop and put those variables within it. Moreover, I've made some minor changes to your XPath and CSS selectors to make them less error-prone. The script is working as desired now:
import scrapy

class CBNCSpider(scrapy.Spider):
    name = 'kontan'
    start_urls = [
        'https://investasi.kontan.co.id/rubrik/28/Emiten'
    ]

    def parse(self, response):
        for crawl in response.xpath("//*[@id='list-news']//*[@class='ket']"):
            d = {}
            d['title'] = crawl.css("h1 > a::text").get()
            d['source'] = response.urljoin(crawl.css("h1 > a::attr(href)").get())
            d['date'] = crawl.css("span.font-gray::text").get().strip("|")
            yield scrapy.Request(
                url=d['source'],
                callback=self.parseparagraph,
                meta={'item': d}
            )

    def parseparagraph(self, response):
        items_old = response.meta['item']
        items_old['paragraph'] = response.xpath("//p/text()").getall()
        yield items_old
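To merge the paragraphs into a single string, as the question asks, you could join the list before yielding; a minimal tweak of parseparagraph (the separator is a matter of taste):

def parseparagraph(self, response):
    items_old = response.meta['item']
    # getall() returns a list of text nodes; join them into one string.
    paragraphs = response.xpath("//p/text()").getall()
    items_old['paragraph'] = " ".join(p.strip() for p in paragraphs)
    yield items_old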
I'm using scrapy for scraping some pages and I want in each row:
Title
Url
Author
The problem is that (sometimes) there are several titles and URLs, but the author appears only once per page. So I want to attach the respective author to the URLs and titles (which come out fine).
This is my (bad) code. I tried to make a loop, but I don't think it works very well; plus, it raises the error "Spider must return Request, BaseItem, dict or None, got 'list'". Can you tell me where my mistake is?
def parse(self, response):
    sels = response.xpath('//td[@class="default"]')
    items = []
    for sel in sels:
        item = ThisItem()
        item['URL'] = sel.xpath('//td[@class]/a/@href').extract()
        item['TITLE'] = sel.xpath('//td[@class]/a').extract()
        i = item['TITLE']
        for i in sels:
            item['AUTHOR'] = sel.xpath('//td[@class]/b[1]').extract()
        items.append(item)
    yield items
Thanks in advance.
You should yield every item separately. Try this:
def parse(self, response):
    author = response.xpath('//td[@class]/b[1]').extract()
    for sel in response.xpath('//td[@class="default"]'):
        item = ThisItem()
        item['URL'] = sel.xpath('//td[@class]/a/@href').extract()
        item['TITLE'] = sel.xpath('//td[@class]/a').extract()
        item['AUTHOR'] = author
        yield item
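Note that the inner XPaths above are still absolute, so each item collects the whole page's links and titles rather than its own row's. A variant with dotted, relative paths (a sketch, assuming the title link sits directly under each td) might look like:

def parse(self, response):
    # The author appears once per page, so grab it at the response level.
    author = response.xpath('//td[@class]/b[1]/text()').extract_first()
    for sel in response.xpath('//td[@class="default"]'):
        item = ThisItem()
        # Dotted paths scope the query to the current td.
        item['URL'] = sel.xpath('./a/@href').extract_first()
        item['TITLE'] = sel.xpath('./a/text()').extract_first()
        item['AUTHOR'] = author
        yield item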