Extracting selected links with title and URL using Scrapy - Python

I want to extract only 10 links from this site: https://dmoz-odp.org/Sports/Events/. These links can be found at the bottom of the page; some of them are AOL, Google, etc.
Here is my code:
import scrapy


class cr(scrapy.Spider):
    name = 'prcr'
    start_urls = ['https://dmoz-odp.org/Sports/Events/']

    def parse(self, response):
        items = '.alt-sites'
        for i in response.css(items):
            title = response.css('a::attr(title)').extract()
            link = response.css('a::attr(href)').extract()
            yield dict(title=title, titletext=link)
This works fine, but I need only the last 10 links to be extracted, so please tell me how to do that.

I have made a few changes to your parse method (check the code below) and this should work just fine:
def parse(self, response):
    items = '.alt-sites a'
    for i in response.css(items):
        title = i.css('::text').extract_first()
        link = i.css('::attr(href)').extract_first()
        yield dict(title=title, title_link=link)
hope this helps you.
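If the page ever matches more anchors than the ten you want, a small variant (just a sketch, assuming the links you care about are the last ten matched by .alt-sites a) is to slice the selector list before looping:

def parse(self, response):
    # keep only the last 10 matching <a> elements (assumption: these are the wanted ones)
    for i in response.css('.alt-sites a')[-10:]:
        yield dict(
            title=i.css('::text').extract_first(),
            title_link=i.css('::attr(href)').extract_first(),
        )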

Related

Where to add request.meta on my script to crawl once

I downloaded scrapy-crawl-once and I am trying to run it in my program. I want to scrape each book's URL from the first page of http://books.toscrape.com/ and then scrape the title of the book from that URL. I know I can scrape each book title from the first page, but as practice for scrapy-crawl-once, I wanted to do it this way. I have already added the middlewares and need to know where to add request.meta. From doing some research, there isn't much example code out there for guidance, so I was hoping someone could help here. I learned the basics of Python two weeks ago, so I am struggling right now. I tried this, but the results haven't changed. Can someone help me out, please? I added [:2] so that if I change it to [:3], I can show myself that it works.
def parse(self, response):
    all_the_books = response.xpath("//article[@class='product_pod']")
    for div in all_the_books[:2]:
        book_link = 'http://books.toscrape.com/' + div.xpath(".//h3/a/@href").get()
        request = scrapy.Request(book_link, self.parse_book)
        request.meta['book_link'] = book_link
        yield request

def parse_book(self, response):
    name = response.xpath("//div[@class='col-sm-6 product_main']/h1/text()").get()
    yield {
        'name': name,
    }
Its docs say:
To avoid crawling a particular page multiple times set
request.meta['crawl_once'] = True
So you need to do:
def parse(self, response):
    all_the_books = response.xpath("//article[@class='product_pod']")
    for div in all_the_books[:2]:
        book_link = 'http://books.toscrape.com/' + div.xpath(".//h3/a/@href").get()
        request = scrapy.Request(book_link, self.parse_book)
        request.meta['crawl_once'] = True
        yield request
And it will not crawl that link again.
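Equivalently, the meta dict can be passed straight into the Request constructor, which keeps the flag next to the callback (a minor stylistic variant; scrapy-crawl-once only cares that the key ends up in request.meta):

request = scrapy.Request(
    book_link,
    callback=self.parse_book,
    meta={'crawl_once': True, 'book_link': book_link},
)
yield request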

How do I scrape from a website which has a next button, and also if it is scrolling?

I'm trying to scrape all the data from a website called quotestoscrape. But when I try to run my code, it's only getting one random quote. It should take at least all the data from that one page, but it's only taking one. Also, once I somehow get the data from page 1, I want to get the data from all the pages.
So how do I solve this error (so that it takes all the data from page 1)?
How do I take all the data which is present on the next pages?
items.py file
import scrapy


class QuotetutorialItem(scrapy.Item):
    title = scrapy.Field()
    author = scrapy.Field()
    tag = scrapy.Field()
quotes_spider.py file
import scrapy
from ..items import QuotetutorialItem


class QuoteScrapy(scrapy.Spider):
    name = 'quotes'
    start_urls = [
        'http://quotes.toscrape.com/'
    ]

    def parse(self, response):
        items = QuotetutorialItem()
        all_div_quotes = response.css('div.quote')
        for quotes in all_div_quotes:
            title = quotes.css('span.text::text').extract()
            author = quotes.css('.author::text').extract()
            tag = quotes.css('.tag::text').extract()
            items['title'] = title
            items['author'] = author
            items['tag'] = tag
        yield items
Please tell me what change I can make.
As reported, your yield is missing an indent level. And to follow the next pages, just add a check for the next button and yield a request following it.
import scrapy


class QuoteScrapy(scrapy.Spider):
    name = 'quotes'
    start_urls = [
        'http://quotes.toscrape.com/'
    ]

    def parse(self, response):
        items = {}
        all_div_quotes = response.css('div.quote')
        for quotes in all_div_quotes:
            title = quotes.css('span.text::text').extract()
            author = quotes.css('.author::text').extract()
            tag = quotes.css('.tag::text').extract()
            items['title'] = title
            items['author'] = author
            items['tag'] = tag
            yield items
        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page:
            yield response.follow(next_page)
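One small caveat with the snippet above: it reuses a single items dict across iterations. A safer variant (still just a sketch) builds a fresh dict for each quote, so later quotes cannot overwrite earlier ones if anything downstream keeps a reference:

for quotes in all_div_quotes:
    yield {
        'title': quotes.css('span.text::text').extract(),
        'author': quotes.css('.author::text').extract(),
        'tag': quotes.css('.tag::text').extract(),
    }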
As @LanteDellaRovere has correctly identified in a comment, the yield statement should be executed for each iteration of the for loop - which is why you are only seeing a single (presumably the last) quote from each page.
As far as reading the subsequent pages, you could extract the link from the <nav> element at the bottom of the page, but the structure is very simple - the links (when no tag is specified) are of the form
http://quotes.toscrape.com/page/N/
You will find that for N=1 you get the first page, so simply requesting the URLs for increasing values of N until an attempt returns a 404 should work as a simplistic solution.
Not knowing much about Scrapy I can't give you exact code, but the examples at https://docs.scrapy.org/en/latest/intro/tutorial.html#following-links are fairly helpful if you want a more sophisticated and Pythonic approach.
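A minimal sketch of that page-number idea (an illustration under assumptions, not the poster's exact code; it stops when a page no longer contains quotes, since the site may not return a literal 404 for out-of-range pages):

import scrapy


class PagedQuotesSpider(scrapy.Spider):
    name = 'quotes_paged'
    start_urls = ['http://quotes.toscrape.com/page/1/']

    def parse(self, response):
        quotes = response.css('div.quote')
        for quote in quotes:
            yield {
                'title': quote.css('span.text::text').extract_first(),
                'author': quote.css('.author::text').extract_first(),
                'tag': quote.css('.tag::text').extract(),
            }
        # keep requesting /page/N/ for increasing N while pages still contain quotes
        if quotes:
            page_n = int(response.url.rstrip('/').rsplit('/', 1)[-1])
            yield scrapy.Request(
                'http://quotes.toscrape.com/page/%d/' % (page_n + 1),
                callback=self.parse,
            )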

Best way to get follow links scrapy web crawler

So I'm trying to write a spider to continue clicking a next button on a webpage until it can't anymore (or until I add some logic to make it stop). The code below correctly gets the link to the next page but prints it only once. My question is why isn't it "following" the links that each next button leads to?
class MyprojectSpider(scrapy.Spider):
    name = 'redditbot'
    allowed_domains = ['https://www.reddit.com/r/nfl/?count=25&after=t3_7ax8lb']
    start_urls = ['https://www.reddit.com/r/nfl/?count=25&after=t3_7ax8lb']

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        next_page = hxs.select('//div[@class="nav-buttons"]//a/@href').extract()
        if next_page:
            yield Request(next_page[1], self.parse)
            print(next_page[1])
To go to the next page, instead of printing the link you just need to yield a scrapy.Request object like the following code:
import scrapy


class MyprojectSpider(scrapy.Spider):
    name = 'myproject'
    allowed_domains = ['reddit.com']
    start_urls = ['https://www.reddit.com/r/nfl/']

    def parse(self, response):
        posts = response.xpath('//div[@class="top-matter"]')
        for post in posts:
            # Get your data here
            title = post.xpath('p[@class="title"]/a/text()').extract()
            print(title)

        # Go to next page
        next_page = response.xpath('//span[@class="next-button"]/a/@href').extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
Update: the previous code was wrong; it needed to use the absolute URL, and some XPaths were also wrong. This new one should work.
Hope it helps!
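As a side note (assuming a reasonably recent Scrapy version), response.follow resolves relative URLs itself, so the urljoin call can be dropped:

next_page = response.xpath('//span[@class="next-button"]/a/@href').extract_first()
if next_page:
    yield response.follow(next_page, callback=self.parse)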

Scrapy Request object is not always followed

I am creating a crawler with Scrapy.
My spider must go to a start page which contains a list of links and a link to the next page.
Then, it must follow each link, go to that link, get the info, and return to the main page.
Finally, when the spider has followed each link of the page, it goes to the next page and begins again.
class jiwire(CrawlSpider):
    name = "example"
    allowed_domains = ["example.ndd"]
    start_urls = ["page.example.ndd"]

    rules = (Rule(SgmlLinkExtractor(allow=("next-page\.htm", ), restrict_xpaths=('//div[@class="paging"]',)), callback="parse_items", follow=True),)

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        links = hxs.select('//td[@class="desc"]')
        for link in links:
            link = title.select("h3/a/@href").extract()
            request = Request("http://v4.jiwire.com/" + str(name), callback=self.parse_sub)
            return(request)

    def parse_sub(self, response):
        hxs = HtmlXPathSelector(response)
        name = hxs.select('//div[@id="content"]/div[@class="header"]/h2/text()').extract()
        print name
Let me explain my code: I defined a rule to follow the next pages.
To follow each link of the current page, I create a Request object with the link I got, and I return this object.
Normally, for each request returned, I should see "print name" fire in the parse_sub function.
But only ONE link gets followed (not all of them), and I don't understand why.
It crawls the link fine and the Request object is created fine, but it enters parse_sub only once per page.
Can you help me?
Thanks a lot.
I am back! My problem came from my return statement.
The solution:
for link in links:
    link = title.select("h3/a/@href").extract()
    request = Request(link, callback=self.parse_hotspot)
    yield request
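For context, return exits parse_items after the first link, while yield turns the method into a generator, so every iteration of the loop produces a request. A hedged, self-contained rewrite of parse_items along those lines (assuming each <td class="desc"> cell holds an <h3><a href="..."> link, as the original XPaths suggest) might look like:

def parse_items(self, response):
    hxs = HtmlXPathSelector(response)
    for cell in hxs.select('//td[@class="desc"]'):
        href = cell.select('h3/a/@href').extract()
        if href:
            # yield (not return) so the loop keeps going after the first link
            yield Request("http://v4.jiwire.com/" + href[0], callback=self.parse_sub)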

Having trouble understanding where to look in source code, in order to create a web scraper

I am a noob with Python; I have been teaching myself on and off since this summer. I am going through the Scrapy tutorial, and occasionally reading more about HTML/XML to help me understand Scrapy. My project for myself is to imitate the Scrapy tutorial in order to scrape http://www.gamefaqs.com/boards/916373-pc. I want to get a list of the thread titles along with the thread URLs; should be simple!
My problem lies in not understanding XPath, and also HTML, I guess. When viewing the source code for the GameFAQs site, I am not sure what to look for in order to pull the link and title. I want to say just look at the anchor tag and grab the text, but I am confused about how.
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from tutorial.items import DmozItem


class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains = ["http://www.gamefaqs.com"]
    start_urls = ["http://www.gamefaqs.com/boards/916373-pc"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//a')
        items = []
        for site in sites:
            item = DmozItem()
            item['link'] = site.select('a/@href').extract()
            item['desc'] = site.select('text()').extract()
            items.append(item)
        return items
I want to change this to work on GameFAQs, so what would I put in this path?
I imagine the program returning results something like this:
thread name
thread url
I know the code is not really right, but can someone help me rewrite it to obtain these results? It would help me understand the scraping process better.
The layout and organization of a web page can change and deep tag based paths can be difficult to deal with. I prefer to pattern match the text of the links. Even if the link format changes, matching the new pattern is simple.
For gamefaqs the article links look like:
http://www.gamefaqs.com/boards/916373-pc/37644384
That's the protocol, domain name, literal 'boards' path. '916373-pc' identifies the forum area and '37644384' is the article ID.
We can match links for a specific forum area using a regular expression:
reLink = re.compile(r'.*\/boards\/916373-pc\/\d+$')
if reLink.match(link):
Or any forum area using:
reLink = re.compile(r'.*\/boards\/\d+-[^/]+\/\d+$')
if reLink.match(link):
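A quick sanity check of that pattern against the sample URL above (just an illustration):

import re

reLink = re.compile(r'.*\/boards\/\d+-[^/]+\/\d+$')
print(bool(reLink.match('http://www.gamefaqs.com/boards/916373-pc/37644384')))  # True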
Adding link matching to your code we get:
import re

reLink = re.compile(r'.*\/boards\/\d+-[^/]+\/\d+$')

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    sites = hxs.select('//a')
    items = []
    for site in sites:
        # @href is relative to the current <a>; extract() returns a list of strings
        link = site.select('@href').extract()
        if link and reLink.match(link[0]):
            item = DmozItem()
            item['link'] = link
            item['desc'] = site.select('text()').extract()
            items.append(item)
    return items
Many sites have separate summary and detail pages or description and file links where the paths match a template with an article ID. If needed, you can parse the forum area and article ID like this:
reLink = re.compile(r'.*\/boards\/(?P<area>\d+-[^/]+)\/(?P<id>\d+)$')
m = reLink.match(link)
if m:
    areaStr = m.groupdict()['area']
    idStr = m.groupdict()['id']
idStr will be a string, which is fine for filling in a URL template, but if you need to calculate the previous ID, etc., then convert it to a number:
idInt = int(idStr)
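For instance (a hypothetical use of the parsed pieces), the previous article's URL could then be rebuilt from the same template:

prev_url = 'http://www.gamefaqs.com/boards/%s/%d' % (areaStr, idInt - 1)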
I hope this helps.
