Scrapy - how to manage pagination without 'Next' button? - python

I'm scraping the content of articles from a site like this where there is no 'Next' button to follow. The ItemLoader is passed from parse_issue in response.meta, along with some additional data such as section_name. Here is the function:
def parse_article(self, response):
    self.logger.info('Parse function called parse_article on {}'.format(response.url))
    acrobat = response.xpath('//div[@class="txt__lead"]/p[contains(text(), "Plik do pobrania w wersji (pdf) - wymagany Acrobat Reader")]')
    limiter = response.xpath('//p[@class="limiter"]')
    if not acrobat and not limiter:
        loader = ItemLoader(item=response.meta['periodical_item'].copy(), response=response)
        loader.add_value('section_name', response.meta['section_name'])
        loader.add_value('article_url', response.url)
        loader.add_xpath('article_authors', './/p[@class="l doc-author"]/b')
        loader.add_xpath('article_title', '//div[@class="cf txt "]//h1')
        loader.add_xpath('article_intro', '//div[@class="txt__lead"]//p')
        article_content = response.xpath('.//div[@class=" txt__rich-area"]//p').getall()
        # check for pagination
        next_page_url = response.xpath('//span[@class="pgr_nrs"]/span[contains(text(), 1)]/following-sibling::a[1]/@href').get()
        if next_page_url:
            # I'm not sure what should be here... Something like this: (???)
            yield response.follow(next_page_url, callback=self.parse_article, meta={
                'periodical_item': loader.load_item(),
                'article_content': article_content
            })
        else:
            # article_content already holds extracted strings, so add_value is used here
            loader.add_value('article_content', article_content)
            yield loader.load_item()
The problem is in the parse_article function: I don't know how to combine the content of the paragraphs from all pages into a single item. Does anybody know how to solve this?

Your parse_article looks good. If the issue is just adding the article_content to the loader, you just need to fetch it from response.meta.
I would update this line:
article_content = response.meta.get('article_content', []) + response.xpath('.//div[@class=" txt__rich-area"]//p').getall()
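Putting that together, here is a minimal sketch of the accumulate-then-yield pattern, assuming the same selectors as in the question; note that the follow-up request also has to carry section_name in meta so the last page can still build the loader:
def parse_article(self, response):
    # text collected on earlier pages of the same article, if any
    article_content = response.meta.get('article_content', [])
    article_content += response.xpath('//div[@class=" txt__rich-area"]//p').getall()

    next_page_url = response.xpath(
        '//span[@class="pgr_nrs"]/span[contains(text(), 1)]/following-sibling::a[1]/@href').get()
    if next_page_url:
        # not the last page yet: pass everything collected so far to the next request
        yield response.follow(next_page_url, callback=self.parse_article, meta={
            'periodical_item': response.meta['periodical_item'],
            'section_name': response.meta['section_name'],
            'article_content': article_content,
        })
    else:
        # last page: build the loader once and emit a single combined item
        loader = ItemLoader(item=response.meta['periodical_item'].copy(), response=response)
        loader.add_value('section_name', response.meta['section_name'])
        loader.add_value('article_url', response.url)
        # the add_xpath calls for authors, title and intro would go here, as in the question
        loader.add_value('article_content', article_content)
        yield loader.load_item()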

Just build the next-page URLs yourself and iterate over them.
I noticed that this article had 4 pages, but some could have more.
They are distinguished simply by adding /2 or /3 to the end of the URL, e.g.
https://www.gosc.pl/doc/791526.Zaloz-zbroje/
https://www.gosc.pl/doc/791526.Zaloz-zbroje/2
https://www.gosc.pl/doc/791526.Zaloz-zbroje/3
I don't use Scrapy, but when I need multiple pages I would normally just iterate.
When you first scrape the page, find the maximum number of pages for that article. On that site, for example, it says 1/4, so you know you will need 4 pages in total.
url = "https://www.gosc.pl/doc/791526.Zaloz-zbroje/"
data_store = ""
for i in range(1, 5):
actual_url = "{}{}".format(url, I)
scrape_stuff = content_you_want
data_store += scrape_stuff
# format the collected data
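If it helps, here is one way that loop might be fleshed out with requests and lxml (an illustration only; the content selector is taken from the question and the 4-page count from the example article):
import requests
from lxml import html

base_url = "https://www.gosc.pl/doc/791526.Zaloz-zbroje/"
paragraphs = []

for page in range(1, 5):  # this example article has 4 pages
    # page 1 is the bare URL; later pages get /2, /3, ...
    url = base_url if page == 1 else "{}{}".format(base_url, page)
    tree = html.fromstring(requests.get(url).content)
    paragraphs.extend(tree.xpath('//div[@class=" txt__rich-area"]//p//text()'))

article_text = "\n".join(paragraphs)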

Related

How do I know which URL from start_urls is in use? (Scrapy)

I'm building a Scrapy spider that crawls two pages (e.g. PageDucky, PageHorse), and I pass those two pages in the start_urls field.
But for pagination, I need to take my URL and concatenate "?page=" to it, so I can't pass the entire list.
I already tried to make a for loop, but without success.
Does anyone know how I can make the pagination work for both pages?
Here is my code for now:
class QuotesSpider(scrapy.Spider):
    name = 'QuotesSpider'
    start_urls = ['https://PageDucky.com', 'https://PageHorse.com']
    categories = []
    count = 1

    def parse(self, response):
        # Get categories
        urli = response.url
        QuotesSpider.categories = urli[urli.find('/browse')+7:].split('/')
        QuotesSpider.categories.pop(0)
        # GET ITEMS PER PAGE AND CALC THE PAGINATION
        items = int(response.xpath(
            '*//div[@id="body"]/div/label[@class="item-count"]/text()').get().replace(' items', ''))
        pages = items / 10
        # CALL THE OTHER DEF TO READ THE PAGE ITSELF
        for i in response.css('div#body div a::attr(href)').getall():
            if i[:5] == '/item':
                yield scrapy.Request('http://mainpage' + i, callback=self.parseobj)
        # HERE IS THE PROBLEM, I TESTED AND WITHOUT FOR LOOP WORKS FOR ONE URL ONLY
        for y in QuotesSpider.start_urls:
            if pages >= QuotesSpider.count:
                next_page = y + '?page=' + str(QuotesSpider.count)
                QuotesSpider.count = QuotesSpider.count + 1
                yield scrapy.Request(next_page, callback=self.parse)
Whatever website you're scraping, find the XPath/CSS location of the 'next page' button. Get its href, and yield your next request to that link.
Alternatively, you don't need to use start_urls if you write your own start_requests function, where you can put custom logic inside it, like looping through your desired URLs and appending the correct page number to each. See: https://docs.scrapy.org/en/latest/topics/spiders.html#scrapy.spiders.Spider.start_requests
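A rough sketch of that start_requests idea (the page URLs below are just the question's placeholders, and total_pages stands in for whatever page count you derive from the item count):
import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'QuotesSpider'
    base_urls = ['https://PageDucky.com', 'https://PageHorse.com']

    def start_requests(self):
        for url in self.base_urls:
            # request page 1 of each site and remember which site it belongs to
            yield scrapy.Request(url, callback=self.parse,
                                 meta={'base_url': url, 'page': 1})

    def parse(self, response):
        base_url = response.meta['base_url']
        page = response.meta['page']
        # ... extract the items on this page here ...
        total_pages = 5  # placeholder: calculate from the item count on the page
        if page < total_pages:
            # follow the next page of *this* site only, keeping a per-site counter
            yield scrapy.Request('{}?page={}'.format(base_url, page + 1),
                                 callback=self.parse,
                                 meta={'base_url': base_url, 'page': page + 1})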
UPDATE WITH SOLUTION
I can't use "href" because it isn't the same link; for example, page 1 was 'https:pageducky.com' and page 2 was 'https:duckyducky.com?page=2'.
So I use response.url and manipulate the string around the '?page=' part, something like this:
resp1 = response.url[:response.url.find('?page=')]
resp = resp1 + '?page=' + str(QuotesSpider.count)
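For what it's worth, str.find returns -1 when '?page=' is not present (as on page 1), which would silently chop the last character off the URL; splitting is a safer variant of the same idea:
base = response.url.split('?page=')[0]   # works whether or not '?page=' is present
next_page = '{}?page={}'.format(base, QuotesSpider.count)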

How to get href from the entire page with scrapy (proper css selector)?

I'm trying to scrape a real-estate website: https://www.nepremicnine.net/oglasi-prodaja/slovenija/hisa/. I would like to get the href that is hidden in the tag of the house images.
I would like to get this for the whole page (and the other pages). Here is the code I wrote, which returns nothing (e.g. an empty dictionary):
import scrapy
from ..items import RealEstateSloItem
import time

# first get all the URLs that have more info on the houses
# next crawl those URLs to get the desired information

class RealestateSpider(scrapy.Spider):
    # allowed_domains = ['nepremicnine.net']
    name = 'realestate'
    page_number = 2
    # page 1 url
    start_urls = ['https://www.nepremicnine.net/oglasi-prodaja/slovenija/hisa/1/']

    def parse(self, response):
        items = RealEstateSloItem()  # create it from items class --> need to store it down
        all_links = response.css('a.slika a::attr(href)').extract()
        items['house_links'] = all_links
        yield items
        next_page = 'https://www.nepremicnine.net/oglasi-prodaja/slovenija/hisa/' + str(RealestateSpider.page_number) + '/'
        # print(next_page)
        # if next_page is not None:  # for buttons
        if RealestateSpider.page_number < 180:  # then only make sure to go to the next page
            # if yes then increase it --> for paginations
            time.sleep(1)
            RealestateSpider.page_number += 1
            # parse automatically checks for response.follow if it's there when it's done with this page
            # this is a recursive function
            # follow next page and where should it go after following
            yield response.follow(next_page, self.parse)  # want it to go back to parse
Could you tell me what I am doing wrong here with the CSS selectors?
Your selector is looking for an a element inside a.slika. This should solve your issue:
all_links = response.css('a.slika ::attr(href)').extract()
Those will be relative URLs; you can use response.urljoin() to build the absolute URLs, using the response URL as the base domain.
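Put together, the corrected parse body might look roughly like this, keeping the answer's selector and the item from the question:
def parse(self, response):
    items = RealEstateSloItem()
    relative_links = response.css('a.slika ::attr(href)').extract()
    # turn the relative hrefs into absolute URLs before storing them
    items['house_links'] = [response.urljoin(link) for link in relative_links]
    yield items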

struggling with Scrapy

I'm new to Scrapy and I'm struggling a little with a special case.
Here is the scenario:
I want to scrape a website where there is a list of books.
httpx://...bookshop.../archive is the page where the first 10 books are listed.
Then I want to get the information (name, date, author) for every book in the list. I have to go to another page for each book:
httpx://...bookshop.../book/{random_string}
So there are two types of request:
One for refreshing the list of books.
Another one for getting the book information.
But books can be added to the list at any time.
So I would like to refresh the list every minute,
and I also want to delay all requests by 5 seconds.
Here is my basic solution, but it only works for one "loop":
First I set the delay in settings.py :
DOWNLOAD_DELAY = 5
then the code of my spider :
import time

import scrapy
from scrapy.loader import ItemLoader

class bookshopScraper(scrapy.Spider):
    name = "bookshop"
    url = "httpx://...bookshop.../archive"
    history = []
    last_refresh = 0

    def start_requests(self):
        self.last_refresh = time.time()
        yield scrapy.Request(url=self.url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[3]
        if page == 'archive':
            return self.parse_archive(response)
        else:
            return self.parse_book(response)

    def parse_archive(self, response):
        links = response.css('SOME CSS ').extract()
        for link in links:
            if link not in self.history:
                self.history.append(link)
                yield scrapy.Request(url="httpx://...bookshop.../book/" + link, callback=self.parse)
        if len(self.history) > 10:
            # drop the oldest entries, keep only the 10 most recent links
            n = len(self.history) - 10
            self.history = self.history[n:]

    def parse_book(self, response):
        """
        Load Item
        """
Now I would like to do something like:
if time.time() > self.last_refresh + 80:
    self.last_refresh = time.time()
    return scrapy.Request(url=self.url, callback=self.parse, dont_filter=True)
But I really don't know how to implement this.
PS: I want the same Scrapy instance to run all the time without stopping.
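One way to make that idea concrete, sticking to the spider above (a sketch only; the 80-second threshold is the asker's own figure): yield the refresh request from parse_book with dont_filter=True, so the duplicate filter does not drop the repeated archive URL. Keeping the spider alive once the request queue empties is a separate concern; the spider_idle signal is the usual hook for that.
def parse_book(self, response):
    # ... build and yield the book item here ...

    # if the archive has not been refreshed recently, queue it again;
    # dont_filter=True stops the dupe filter from discarding the repeated URL
    if time.time() > self.last_refresh + 80:
        self.last_refresh = time.time()
        yield scrapy.Request(url=self.url, callback=self.parse, dont_filter=True)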

How to get scrapy spider to add information to an item based on a CSV file

As some of you may have gathered, I'm learning scrapy to scrape some data off of Google Scholar for a research project that I am running. I have a file that contains many article titles for which I am scraping citations. I read in the file using pandas, generate the URLs that need scraping, and start scraping.
One problem that I face is 503 errors. Google shuts me off fairly quickly, and many entries remain unscraped. This is a problem that I am working on using some middleware provided by Crawlera.
Another problem I face is that when I export my scraped data, I have a hard time matching the scraped data to what I was trying to look for. My input data is a CSV file with three fields -- 'Authors','Title','pid' where 'pid' is a unique identifier.
I use pandas to read in the file and generate URLs for scholar based off the title. Each time a given URL is scraped, my spider goes through the scholar webpage, and picks up the title, publication information and cites for each article listed on that page.
Here is how I generate the links for scraping:
class ScholarSpider(Spider):
    name = "scholarscrape"
    allowed_domains = ["scholar.google.com"]
    # get the data
    data = read_csv("../../data/master_jeea.csv")
    # get the titles
    queries = data.Title.apply(urllib.quote)
    # generate a var to store links
    links = []
    # create the URLs to crawl
    for entry in queries:
        links.append("http://scholar.google.com/scholar?q=allintitle%3A" + entry)
    # give the URLs to scrapy
    start_urls = links
For example, one title from my data file could be the paper 'Elephants Don't Play Chess' by Rodney Brooks with 'pid' 5067. The spider goes to
http://scholar.google.com/scholar?q=allintitle%3Aelephants+don%27t+play+chess
Now on this page, there are six hits. The spider gets all six hits, but they need to be assigned the same 'pid'. I know I need to insert a line somewhere that reads something like item['pid'] = data.pid.apply("something") but I can't figure out exactly how I would do that.
Below is the rest of the code for my spider. I am sure the way to do this is pretty straightforward, but I can't think of how to get the spider to know which entry of data.pid it should look for if that makes sense.
def parse(self, response):
    # initialize something to hold the data
    items = []
    sel = Selector(response)
    # get each 'entry' on the page
    # an entry is a self contained div
    # that has the title, publication info
    # and cites
    entries = sel.xpath('//div[@class="gs_ri"]')
    # a counter for the entry that is being scraped
    count = 1
    for entry in entries:
        item = ScholarscrapeItem()
        # get the title
        title = entry.xpath('.//h3[@class="gs_rt"]/a//text()').extract()
        # the title is messy
        # clean up
        item['title'] = "".join(title)
        # get publication info
        # clean up
        author = entry.xpath('.//div[@class="gs_a"]//text()').extract()
        item['authors'] = "".join(author)
        # get the portion that contains citations
        cite_string = entry.xpath('.//div[@class="gs_fl"]//text()').extract()
        # find the part that says "Cited by"
        match = re.search("Cited by \d+", str(cite_string))
        # if it exists, note the number
        if match:
            cites = re.search("\d+", match.group()).group()
        # if not, there is no citation info
        else:
            cites = None
        item['cites'] = cites
        item['entry'] = count
        # iterate the counter
        count += 1
        # append this item to the list
        items.append(item)
    return items
I hope this question is well-defined, but please let me know if I can be more clear. There is really not much else in my scraper except some lines at the top importing things.
Edit 1: Based on suggestions below, I have modified my code as follows:
# test-case: http://scholar.google.com/scholar?q=intitle%3Amigratory+birds
import re
from pandas import *
import urllib
from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.http import Request
from scholarscrape.items import ScholarscrapeItem
class ScholarSpider(Spider):
    name = "scholarscrape"
    allowed_domains = ["scholar.google.com"]
    # get the data
    data = read_csv("../../data/master_jeea.csv")
    # get the titles
    queries = data.Title.apply(urllib.quote)
    pid = data.pid
    # generate a var to store links
    urls = []
    # create the URLs to crawl
    for entry in queries:
        urls.append("http://scholar.google.com/scholar?q=allintitle%3A" + entry)
    # give the URLs to scrapy
    start_urls = (
        (urls, pid),
    )

    def make_requests_from_url(self, (url, pid)):
        return Request(url, meta={'pid': pid}, callback=self.parse, dont_filter=True)
    def parse(self, response):
        # initialize something to hold the data
        items = []
        sel = Selector(response)
        # get each 'entry' on the page
        # an entry is a self contained div
        # that has the title, publication info
        # and cites
        entries = sel.xpath('//div[@class="gs_ri"]')
        # a counter for the entry that is being scraped
        count = 1
        for entry in entries:
            item = ScholarscrapeItem()
            # get the title
            title = entry.xpath('.//h3[@class="gs_rt"]/a//text()').extract()
            # the title is messy
            # clean up
            item['title'] = "".join(title)
            # get publication info
            # clean up
            author = entry.xpath('.//div[@class="gs_a"]//text()').extract()
            item['authors'] = "".join(author)
            # get the portion that contains citations
            cite_string = entry.xpath('.//div[@class="gs_fl"]//text()').extract()
            # find the part that says "Cited by"
            match = re.search("Cited by \d+", str(cite_string))
            # if it exists, note the number
            if match:
                cites = re.search("\d+", match.group()).group()
            # if not, there is no citation info
            else:
                cites = None
            item['cites'] = cites
            item['entry'] = count
            item['pid'] = response.meta['pid']
            # iterate the counter
            count += 1
            # append this item to the list
            items.append(item)
        return items
You need to populate your start_urls list with (url, pid) tuples.
Now redefine the method make_requests_from_url(url):
class ScholarSpider(Spider):
    name = "ScholarSpider"
    allowed_domains = ["scholar.google.com"]
    start_urls = (
        ('http://www.scholar.google.com/', 100),
    )

    def make_requests_from_url(self, (url, pid)):
        return Request(url, meta={'pid': pid}, callback=self.parse, dont_filter=True)

    def parse(self, response):
        pid = response.meta['pid']
        print '!!!!!!!!!!!', pid, '!!!!!!!!!!!!'
        pass
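Note that the tuple-unpacking parameter syntax above only works in Python 2, and make_requests_from_url has since been deprecated in Scrapy. A roughly equivalent start_requests sketch (the example URL and pid are the ones from the question):
import scrapy

class ScholarSpider(scrapy.Spider):
    name = "scholarscrape"
    allowed_domains = ["scholar.google.com"]

    # (url, pid) pairs, built from the CSV exactly as in the question
    url_pid_pairs = [
        ("http://scholar.google.com/scholar?q=allintitle%3Aelephants+don%27t+play+chess", 5067),
    ]

    def start_requests(self):
        for url, pid in self.url_pid_pairs:
            # carry the pid in meta so parse() can attach it to every item on the page
            yield scrapy.Request(url, callback=self.parse, meta={'pid': pid})

    def parse(self, response):
        pid = response.meta['pid']
        # ... build the items as before and set item['pid'] = pid on each one ...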

Scrapy spider: get the information that is inside the links

I have made a spider that can take the information from this page and follow the "Next page" links. Right now the spider only takes the information that I'm showing in the following structure.
The structure of the page is something like this
Title 1
URL 1 ---------> If you click you go to one page with more information
Location 1
Title 2
URL 2 ---------> If you click you go to one page with more information
Location 2
Next page
What I want is for the spider to go to each URL link and get the full information. I suppose that I must generate another rule specifying that I want to do something like this.
The behaviour of the spider should be:
Go to URL1 (get info)
Go to URL2 (get info)
...
Next page
But I don't know how I can implement it. Can someone guide me?
Code of my Spider:
class BcnSpider(CrawlSpider):
    name = 'bcn'
    allowed_domains = ['guia.bcn.cat']
    start_urls = ['http://guia.bcn.cat/index.php?pg=search&q=*:*']

    rules = (
        Rule(
            SgmlLinkExtractor(
                allow=(re.escape("index.php")),
                restrict_xpaths=("//div[@class='paginador']")),
            callback="parse_item",
            follow=True),
    )

    def parse_item(self, response):
        self.log("parse_item")
        sel = Selector(response)
        sites = sel.xpath("//div[@id='llista-resultats']/div")
        items = []
        cont = 0
        for site in sites:
            item = BcnItem()
            item['id'] = cont
            item['title'] = u''.join(site.xpath('h3/a/text()').extract())
            item['url'] = u''.join(site.xpath('h3/a/@href').extract())
            item['when'] = u''.join(site.xpath('div[@class="dades"]/dl/dd[1]/text()').extract())
            item['where'] = u''.join(site.xpath('div[@class="dades"]/dl/dd[2]/span/a/text()').extract())
            item['street'] = u''.join(site.xpath('div[@class="dades"]/dl/dd[3]/span/text()').extract())
            item['phone'] = u''.join(site.xpath('div[@class="dades"]/dl/dd[4]/text()').extract())
            items.append(item)
            cont = cont + 1
        return items
EDIT: After searching on the internet I found some code with which I can do that.
First of all, I have to get all the links, then I have to call another parse method.
def parse(self, response):
    # Get all URLs
    yield Request(url=_url, callback=self.parse_details)

def parse_details(self, response):
    # Detailed information of each page
If you want to use Rules because the page has a paginator, you should change def parse to def parse_start_url and then call this method through the Rule. With this change you make sure that the parsing begins at parse_start_url, and the code would be something like this:
rules = (
    Rule(
        SgmlLinkExtractor(
            allow=(re.escape("index.php")),
            restrict_xpaths=("//div[@class='paginador']")),
        callback="parse_start_url",
        follow=True),
)

def parse_start_url(self, response):
    # Get all URLs
    yield Request(url=_url, callback=self.parse_details)

def parse_details(self, response):
    # Detailed information of each page
That's all, folks.
There is an easier way of achieving this. Click 'next' on your link, and read the new URL carefully:
http://guia.bcn.cat/index.php?pg=search&from=10&q=*:*&nr=10
By looking at the GET data in the URL (everything after the question mark), and with a bit of testing, we find that the parameters mean:
from=10 - starting index
q=*:* - search query
nr=10 - number of items to display
This is how I would have done it:
Set nr=100 or higher. (1000 may do as well; just be sure that there is no timeout.)
Loop from from=0 to 34300. This is above the number of entries currently; you may want to extract this value first.
Example code:
entries = 34246
step = 100
stop = entries - entries % step + step

for x in xrange(0, stop, step):
    url = 'http://guia.bcn.cat/index.php?pg=search&from={}&q=*:*&nr={}'.format(x, step)
    # Loop over all entries, and open links if needed
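In a Scrapy spider, those generated listing URLs could simply become the start_urls; a sketch, reusing the result-list XPath from the question's spider and response.follow for the detail links:
import scrapy

ENTRIES = 34246   # rough total taken from the answer above; ideally scraped first
STEP = 100        # value of the nr= parameter

class BcnAllPagesSpider(scrapy.Spider):
    name = 'bcn_all_pages'
    allowed_domains = ['guia.bcn.cat']
    start_urls = [
        'http://guia.bcn.cat/index.php?pg=search&from={}&q=*:*&nr={}'.format(x, STEP)
        for x in range(0, ENTRIES - ENTRIES % STEP + STEP, STEP)
    ]

    def parse(self, response):
        # follow the detail link of every result block on this listing page
        for href in response.xpath("//div[@id='llista-resultats']/div/h3/a/@href").getall():
            yield response.follow(href, callback=self.parse_details)

    def parse_details(self, response):
        # extract the full event details here
        pass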
