I have two spiders in one Scrapy project. Spider1 crawls a list of pages or an entire website and analyzes the content. Spider2 uses Splash to fetch URLs from Google and passes that list to Spider1.
So Spider1 crawls and analyzes content, and can be used on its own without being called by Spider2.
# coding: utf8
from scrapy.spiders import CrawlSpider
import scrapy


class Spider1(scrapy.Spider):
    name = "spider1"
    tokens = []
    query = ''

    def __init__(self, *args, **kwargs):
        '''
        This spider works in two modes:
        given only one URL it crawls the entire website,
        given a list of URLs it only analyzes those pages.
        '''
        super(Spider1, self).__init__(*args, **kwargs)
        start_url = kwargs.get('start_url') or ''
        start_urls = kwargs.get('start_urls') or []
        query = kwargs.get('q') or ''
        if query != '':
            self.query = query
        if start_url != '':
            self.start_urls = [start_url]
        if len(start_urls) > 0:
            self.start_urls = start_urls

    def parse(self, response):
        '''
        Analyze and store data
        '''
        if len(self.start_urls) == 1:
            for next_page in response.css('a::attr("href")'):
                yield response.follow(next_page, self.parse)

    def closed(self, reason):
        '''
        Finalize crawl
        '''
The code for Spider2:
# coding: utf8
import scrapy
from scrapy_splash import SplashRequest
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


class Spider2(scrapy.Spider):
    name = "spider2"
    urls = []
    page = 0

    def __init__(self, *args, **kwargs):
        super(Spider2, self).__init__(*args, **kwargs)
        self.query = kwargs.get('q')
        self.url = kwargs.get('url')
        self.start_urls = ['https://www.google.com/search?q=' + self.query]

    def start_requests(self):
        for url in self.start_urls:
            splash_args = {
                'wait': 1,
            }
            yield SplashRequest(url, self.parse, args=splash_args)

    def parse(self, response):
        '''
        Extract URLs into self.urls
        '''
        self.page += 1

    def closed(self, reason):
        process = CrawlerProcess(get_project_settings())
        for url in self.urls:
            print(url)
        if len(self.urls) > 0:
            process.crawl('spider1', start_urls=self.urls, q=self.query)
            process.start(False)
When running Spider2 I get this error: twisted.internet.error.ReactorAlreadyRunning, and Spider1 is called without its list of URLs.
I tried using CrawlerRunner as advised by the Scrapy documentation, but I get the same problem.
I tried using CrawlerProcess inside the parse method; it "works", but I still get the error message. When using CrawlerRunner inside the parse method, it doesn't work.
Currently it is not possible to start a spider from another spider if you're using the scrapy crawl command (see https://github.com/scrapy/scrapy/issues/1226). It is possible to start a spider from a spider if you write a startup script yourself - the trick is to use the same CrawlerProcess/CrawlerRunner instance.
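For reference, a minimal sketch of such a startup script (the spider names and arguments come from the code above; the query string and URL are assumptions). It does not chain the spiders, but it schedules both on the same CrawlerProcess and starts the reactor only once:
# run_spiders.py - run with `python run_spiders.py`, not `scrapy crawl`
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
# both spiders share the same CrawlerProcess instance
process.crawl('spider2', q='some query')
process.crawl('spider1', start_url='http://example.com', q='some query')
process.start()  # the reactor is started once, after all crawls are scheduled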
I wouldn't do that though; you're fighting against the framework. It'd be nice to support this use case, but it is not really supported now.
An easier way is to either rewrite your code to use a single Spider class, or to create a script (bash, Makefile, luigi/airflow if you want to be fancy) which runs scrapy crawl spider1 -o items.jl followed by scrapy crawl spider2; the second spider can read the items created by the first spider and generate start_requests accordingly.
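For example, a minimal sketch of the second spider reading the first spider's output (the file name found_urls.jl, the url item field, and the spider names are assumptions):
import json

import scrapy


class AnalyzerSpider(scrapy.Spider):
    # hypothetical second spider that consumes the items exported by the first one
    name = "analyzer"

    def start_requests(self):
        # read the JSON-lines file written by e.g.: scrapy crawl collector -o found_urls.jl
        with open('found_urls.jl') as f:
            for line in f:
                item = json.loads(line)
                yield scrapy.Request(item['url'], callback=self.parse)

    def parse(self, response):
        # analyze the page here
        yield {'url': response.url, 'title': response.css('title::text').extract_first()}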
FTR: combining SplashRequests and regular scrapy.Requests in a single spider is fully supported (it should just work); you don't have to create separate spiders for them.
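A rough sketch of what that could look like in one spider (the spider name, selectors, and callbacks are assumptions, not the asker's code):
import scrapy
from scrapy_splash import SplashRequest


class MixedSpider(scrapy.Spider):
    name = "mixed"

    def start_requests(self):
        # render the search results page through Splash
        yield SplashRequest('https://www.google.com/search?q=scrapy',
                            self.parse_results, args={'wait': 1})

    def parse_results(self, response):
        # follow each result with a plain (non-Splash) request
        for href in response.css('a::attr(href)').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse_page)

    def parse_page(self, response):
        yield {'url': response.url, 'title': response.css('title::text').extract_first()}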
Related
I tried using a generic scrapy.Spider to follow links, but it didn't work - so I hit upon the idea of simplifying the process by accessing the sitemap.txt instead, but that didn't work either!
I wrote a simple example (to help me understand the algorithm) of a spider to follow the sitemap specified for my site: https://legion-216909.appspot.com/sitemap.txt It is meant to navigate the URLs specified in the sitemap, print them to screen and output the results into a links.txt file. The code:
import scrapy
from scrapy.spiders import SitemapSpider


class MySpider(SitemapSpider):
    name = "spyder_PAGE"
    sitemap_urls = ['https://legion-216909.appspot.com/sitemap.txt']

    def parse(self, response):
        print(response.url)
        return response.url
I ran the above spider with scrapy crawl spyder_PAGE > links.txt, but that returned an empty text file. I have gone through the Scrapy docs multiple times, but there is something missing. Where am I going wrong?
SitemapSpider expects an XML sitemap format, which causes the spider to exit with this error:
[scrapy.spiders.sitemap] WARNING: Ignoring invalid sitemap: <200 https://legion-216909.appspot.com/sitemap.txt>
Since your sitemap.txt file is just a plain list of URLs, it would be easier to split them with a string method.
For example:
from scrapy import Spider, Request


class MySpider(Spider):
    name = "spyder_PAGE"
    start_urls = ['https://legion-216909.appspot.com/sitemap.txt']

    def parse(self, response):
        links = response.text.split('\n')
        for link in links:
            # yield a request to get this link
            print(link)

# https://legion-216909.appspot.com/index.html
# https://legion-216909.appspot.com/content.htm
# https://legion-216909.appspot.com/Dataset/module_4_literature/Unit_1/.DS_Store
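If you want the spider to actually visit each link instead of just printing it, a minimal variation could look like this (the parse_page callback is an assumption):
from scrapy import Spider, Request


class MySpider(Spider):
    name = "spyder_PAGE"
    start_urls = ['https://legion-216909.appspot.com/sitemap.txt']

    def parse(self, response):
        # one request per non-empty line of the plain-text sitemap
        for link in response.text.splitlines():
            if link.strip():
                yield Request(link.strip(), callback=self.parse_page)

    def parse_page(self, response):
        # record the visited URL; replace with your real extraction logic
        yield {'url': response.url}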
You only need to override _parse_sitemap(self, response) from SitemapSpider with the following:
from scrapy import Request
from scrapy.spiders import SitemapSpider


class MySpider(SitemapSpider):
    sitemap_urls = [...]
    sitemap_rules = [...]

    def _parse_sitemap(self, response):
        # yield a request for each url in the txt file that matches your filters
        urls = response.text.splitlines()
        it = self.sitemap_filter(urls)
        for loc in it:
            for r, c in self._cbs:
                if r.search(loc):
                    yield Request(loc, callback=c)
                    break
I have made a spider using scrapy and I am trying to save download links into a (python) list, so I can later call a list entry using downloadlist[1].
But scrapy saves the urls as items instead of as a list. Is there a way to append each url into a list?
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
from scrapy.http import Request
import scrapy
from scrapy.linkextractors import LinkExtractor

DOMAIN = 'some-domain.com'
URL = 'http://' + str(DOMAIN)

linklist = []


class subtitles(scrapy.Spider):
    name = DOMAIN
    allowed_domains = [DOMAIN]
    start_urls = [
        URL
    ]

    # First parse returns all the links of the website and feeds them to parse2
    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        for url in hxs.select('//a/@href').extract():
            if not (url.startswith('http://') or url.startswith('https://')):
                url = URL + url
            yield Request(url, callback=self.parse2)

    # Second parse selects only the links that contain "download"
    def parse2(self, response):
        le = LinkExtractor(allow=("download"))
        for link in le.extract_links(response):
            yield Request(url=link.url, callback=self.parse2)
            print link.url


# prints list of urls, 'downloadlist' should be a list but isn't.
downloadlist = subtitles()
print downloadlist
You are misunderstanding how classes work; you are calling a class here, not a function.
Think about it this way: the spider that you define in class MySpider(Spider) is a template that is used by the Scrapy engine. When you run scrapy crawl myspider, Scrapy starts up an engine and reads your template to create an object that will be used to process the responses.
So your idea here can be simply translated to:
def parse2(self, response):
    le = LinkExtractor(allow=("download"))
    for link in le.extract_links(response):
        yield {'url': link.url}
If you call this with scrapy crawl myspider -o items.json you'll get all of the download links in json format.
There is no reason to save the download links to a list on the spider, since it will no longer be part of this spider template (class) that you wrote, and essentially it will have no purpose.
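If you really do want a plain Python list afterwards, one option (a sketch, assuming the export file is named items.json) is to read the exported file back in a separate script:
import json

# read the file produced by: scrapy crawl myspider -o items.json
with open('items.json') as f:
    items = json.load(f)

downloadlist = [item['url'] for item in items]
print(downloadlist[1])  # index into the list, as the question wanted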
I'm using the latest version of Scrapy (http://doc.scrapy.org/en/latest/index.html) and am trying to figure out how to make Scrapy crawl only the URL(s) fed to it as part of the start_urls list. In most cases I want to crawl only one page, but in some cases there may be multiple pages that I will specify. I don't want it to crawl on to other pages.
I've tried setting the depth limit to 1, but from my testing I'm not sure it accomplished what I was hoping to achieve.
Any help will be greatly appreciated!
Thank you!
2015-12-22 - Code update:
# -*- coding: utf-8 -*-
import scrapy
from generic.items import GenericItem


class GenericspiderSpider(scrapy.Spider):
    name = "genericspider"

    def __init__(self, domain, start_url, entity_id):
        self.allowed_domains = [domain]
        self.start_urls = [start_url]
        self.entity_id = entity_id

    def parse(self, response):
        for href in response.css("a::attr('href')"):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.parse_dir_contents)

    def parse_dir_contents(self, response):
        for sel in response.xpath("//body//a"):
            item = GenericItem()
            item['entity_id'] = self.entity_id
            # gets the actual email address
            item['emails'] = response.xpath("//a[starts-with(@href, 'mailto')]").re(r'mailto:\s*(.*?)"')
            yield item
Below, in the first response, you mention using a generic spider --- isn't that what I'm doing in the code? Also are you suggesting I remove the
callback=self.parse_dir_contents
from the parse function?
Thank you.
It looks like you are using CrawlSpider, which is a special kind of Spider for crawling multiple pages of a site by following links.
To crawl only the URLs specified inside start_urls, just override the parse method, as that is the default callback of the start requests.
Below is code for a spider that will scrape the title from a blog (note: the XPath might not be the same for every blog).
Filename: /spiders/my_spider.py
import scrapy
# DmozItem is the item class defined in your project's items.py


class MySpider(scrapy.Spider):
    name = "craig"
    allowed_domains = ["www.blogtrepreneur.com"]
    start_urls = ["http://www.blogtrepreneur.com/the-best-juice-cleanse-for-weight-loss/"]

    def parse(self, response):
        items = []
        item = DmozItem()
        item["title"] = response.xpath('//h1/text()').extract()
        item["article"] = response.xpath('//div[@id="tve_editor"]//p//text()').extract()
        items.append(item)
        return items
The above code will only fetch the title and the article body of the given article.
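To run it and export the scraped item, something like scrapy crawl craig -o article.json should work (the output file name is arbitrary).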
I had the same problem, because I was using:
import scrapy
from scrapy.spiders import CrawlSpider
Then I changed it to:
import scrapy
from scrapy.spiders import Spider
And changed the class to:
class mySpider(Spider):
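Put together, a minimal version of the resulting spider might look like this (the spider name, URL and parse body are illustrative assumptions):
import scrapy
from scrapy.spiders import Spider


class mySpider(Spider):
    name = "myspider"
    start_urls = ["http://example.com/"]

    def parse(self, response):
        # only the pages listed in start_urls are fetched; no links are followed
        yield {'url': response.url, 'title': response.xpath('//h1/text()').extract()}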
I am trying to recursively crawl the URLs on a webpage and then parse those pages to get all the tags on each page. I tried crawling a single page with Scrapy, without recursively following the URLs on the page, and it worked; but when I changed my code to crawl the entire site, it crawls the site but gives a really weird error at the end. The code for the spider and the error are given below; the code takes the list of domains to crawl as an argument in a file.
import scrapy
from tags.items import TagsItem
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor


class TagSpider(scrapy.Spider):
    name = "getTags"
    allowed_domains = []
    start_urls = []
    rules = (Rule(LinkExtractor(), callback='parse_tags', follow=True),)

    def __init__(self, filename=None):
        for line in open(filename, 'r').readlines():
            self.allowed_domains.append(line)
            self.start_urls.append('http://%s' % line)

    def parse_start_url(self, response):
        return self.parse_tags(response)

    def parse_tags(self, response):
        for sel in response.xpath('//*').re(r'</?\w+\s+[^>]*>'):
            item = TagsItem()
            item['tag'] = sel
            item['url'] = response.url
            print item
This is the error dump I am getting:
I want to get the website addresses of some jobs, so I wrote a Scrapy spider. I want to get all of the values matching the XPath //article/dl/dd/h2/a[@class="job-title"]/@href, but when I execute the spider with the command:
scrapy spider auseek -a addsthreshold=3
the variable "urls" used to preserve values is empty, can someone help me to figure it,
here is my code:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import Selector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.conf import settings
from scrapy.http import Request
from scrapy.mail import MailSender
from scrapy.xlib.pydispatch import dispatcher
from scrapy.exceptions import CloseSpider
from scrapy import log
from scrapy import signals
from myProj.items import ADItem
import time
import urlparse


class AuSeekSpider(CrawlSpider):
    name = "auseek"
    result_address = []
    addressCount = int(0)
    addressThresh = int(0)
    allowed_domains = ["seek.com.au"]
    start_urls = [
        "http://www.seek.com.au/jobs/in-australia/"
    ]

    def __init__(self, **kwargs):
        super(AuSeekSpider, self).__init__()
        self.addressThresh = int(kwargs.get('addsthreshold'))
        print 'init finished...'

    def parse_start_url(self, response):
        print 'This is start url function'
        log.msg("Pipeline.spider_opened called", level=log.INFO)
        hxs = Selector(response)
        urls = hxs.xpath('//article/dl/dd/h2/a[@class="job-title"]/@href').extract()
        print 'urls is:', urls
        print 'test element:', urls[0].encode("ascii")
        for url in urls:
            postfix = url.getAttribute('href')
            print 'postfix:', postfix
            url = urlparse.urljoin(response.url, postfix)
            yield Request(url, callback=self.parse_ad)
        return

    def parse_ad(self, response):
        print 'this is parse_ad function'
        hxs = Selector(response)
        item = ADItem()
        log.msg("Pipeline.parse_ad called", level=log.INFO)
        item['name'] = str(self.name)
        item['picNum'] = str(6)
        item['link'] = response.url
        item['date'] = time.strftime('%Y%m%d', time.localtime(time.time()))
        self.addressCount = self.addressCount + 1
        if self.addressCount > self.addressThresh:
            raise CloseSpider('Get enough website address')
        return item
The problem is:
urls = hxs.xpath('//article/dl/dd/h2/a[@class="job-title"]/@href').extract()
urls is empty when I try to print it out. I just can't figure out why it doesn't work or how to correct it. Thanks for your help.
Here is a working example using Selenium and the PhantomJS headless webdriver in a downloader middleware.
from scrapy.http import HtmlResponse
from selenium import webdriver


class JsDownload(object):

    @check_spider_middleware
    def process_request(self, request, spider):
        driver = webdriver.PhantomJS(executable_path='D:\phantomjs.exe')
        driver.get(request.url)
        return HtmlResponse(request.url, encoding='utf-8',
                            body=driver.page_source.encode('utf-8'))
I wanted the ability to tell different spiders which middleware to use, so I implemented this wrapper:
import functools

from scrapy import log


def check_spider_middleware(method):

    @functools.wraps(method)
    def wrapper(self, request, spider):
        msg = '%%s %s middleware step' % (self.__class__.__name__,)
        if self.__class__ in spider.middleware:
            spider.log(msg % 'executing', level=log.DEBUG)
            return method(self, request, spider)
        else:
            spider.log(msg % 'skipping', level=log.DEBUG)
            return None
    return wrapper
settings.py:
DOWNLOADER_MIDDLEWARES = {'MyProj.middleware.MiddleWareModule.MiddleWareClass': 500}
For the wrapper to work, all spiders must have at minimum:
middleware = set([])
To include a middleware:
middleware = set([MyProj.middleware.ModuleName.ClassName])
You could have implemented this in a request callback (in the spider), but then the HTTP request would happen twice. This isn't a foolproof solution, but it works for content that loads on .ready(). If you spend some time reading into Selenium, you can wait for specific events to trigger before saving the page source.
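For instance, a hedged sketch of such an explicit wait (the CSS selector and the 10-second timeout are assumptions):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.PhantomJS(executable_path='D:\phantomjs.exe')
driver.get('http://www.seek.com.au/jobs/in-australia/')

# block until the job-title anchors appear (or fail after 10 seconds)
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, 'a.job-title'))
)
html = driver.page_source
driver.quit()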
Another example: https://github.com/scrapinghub/scrapyjs
More info: What's the best way of scraping data from a website?
Cheers!
Scrapy does not evaluate JavaScript. If you run the following command, you will see that the raw HTML does not contain the anchors you are looking for:
curl http://www.seek.com.au/jobs/in-australia/ | grep job-title
You should try PhantomJS or Selenium instead.
After examining the network requests in Chrome, the job listings appear to originate from this JSONP request. It should be easy to retrieve whatever you need from it.