Hi, I am working with Scrapy to scrape XML URLs.
Suppose the following is my spider.py code:
class TestSpider(BaseSpider):
    name = "test"
    allowed_domains = {"www.example.com"}
    start_urls = [
        "https://example.com/jobxml.asp"
    ]

    def parse(self, response):
        print response, "??????????????????????"
Result:
2012-07-24 16:43:34+0530 [scrapy] INFO: Scrapy 0.14.3 started (bot: testproject)
2012-07-24 16:43:34+0530 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
2012-07-24 16:43:34+0530 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-07-24 16:43:34+0530 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-07-24 16:43:34+0530 [scrapy] DEBUG: Enabled item pipelines:
2012-07-24 16:43:34+0530 [test] INFO: Spider opened
2012-07-24 16:43:34+0530 [test] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-07-24 16:43:34+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-07-24 16:43:34+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-07-24 16:43:36+0530 [testproject] DEBUG: Retrying <GET https://example.com/jobxml.asp> (failed 1 times): 400 Bad Request
2012-07-24 16:43:37+0530 [test] DEBUG: Retrying <GET https://example.com/jobxml.asp> (failed 2 times): 400 Bad Request
2012-07-24 16:43:38+0530 [test] DEBUG: Gave up retrying <GET https://example.com/jobxml.asp> (failed 3 times): 400 Bad Request
2012-07-24 16:43:38+0530 [test] DEBUG: Crawled (400) <GET https://example.com/jobxml.asp> (referer: None)
2012-07-24 16:43:38+0530 [test] INFO: Closing spider (finished)
2012-07-24 16:43:38+0530 [test] INFO: Dumping spider stats:
{'downloader/request_bytes': 651,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 3,
'downloader/response_bytes': 504,
'downloader/response_count': 3,
'downloader/response_status_count/400': 3,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2012, 7, 24, 11, 13, 38, 573931),
'scheduler/memory_enqueued': 3,
'start_time': datetime.datetime(2012, 7, 24, 11, 13, 34, 803202)}
2012-07-24 16:43:38+0530 [test] INFO: Spider closed (finished)
2012-07-24 16:43:38+0530 [scrapy] INFO: Dumping global stats:
{'memusage/max': 263143424, 'memusage/startup': 263143424}
Does Scrapy not work for XML scraping? If it does, can anyone please provide an example of how to scrape XML tag data?
Thanks in advance.
There is a specific spider made for scraping XML feeds: XMLFeedSpider. This is from the Scrapy documentation:
XMLFeedSpider example
These spiders are pretty easy to use, let’s have a look at one example:
from scrapy import log
from scrapy.contrib.spiders import XMLFeedSpider
from myproject.items import TestItem

class MySpider(XMLFeedSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/feed.xml']
    iterator = 'iternodes'  # This is actually unnecessary, since it's the default value
    itertag = 'item'

    def parse_node(self, response, node):
        log.msg('Hi, this is a <%s> node!: %s' % (self.itertag, ''.join(node.extract())))

        item = TestItem()
        item['id'] = node.select('@id').extract()
        item['name'] = node.select('name').extract()
        item['description'] = node.select('description').extract()
        return item
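The example imports TestItem from myproject.items but never shows it. A minimal items.py sketch for it might look like this (the field names are my assumption, chosen to match the keys used in parse_node above):

# myproject/items.py -- hypothetical sketch matching the fields used in parse_node
from scrapy.item import Item, Field

class TestItem(Item):
    id = Field()
    name = Field()
    description = Field()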
Here is another way, without Scrapy:
This is a function used to download the XML from a given URL; it relies on urllib2 and sys (imports included below) and also prints a nice progress indicator while downloading the XML file.

import sys
import urllib2

def get_file(self, dir, url, name):
    s = urllib2.urlopen(url)
    f = open('xml/test.xml', 'w')
    meta = s.info()
    file_size = int(meta.getheaders("Content-Length")[0])
    print "Downloading: %s Bytes: %s" % (name, file_size)

    current_file_size = 0
    block_size = 4096
    while True:
        buf = s.read(block_size)
        if not buf:
            break
        current_file_size += len(buf)
        f.write(buf)
        status = ("\r%10d [%3.2f%%]" %
                  (current_file_size, current_file_size * 100. / file_size))
        status = status + chr(8) * (len(status) + 1)
        sys.stdout.write(status)
        sys.stdout.flush()
    f.close()
    print "\nDone getting feed"
    return 1
Then you parse the XML file you downloaded and saved with iterparse (from xml.etree.ElementTree), something like:

for event, elem in iterparse('xml/test.xml'):
    if elem.tag == "properties":
        print elem.text

That's just an example of how you walk through the XML tree.
Also, this is old code of mine, so you would be better off using with for opening files.
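For example, a minimal sketch of that same walk wrapped in a with block (the file name and the "properties" tag are just the placeholders from the snippet above):

from xml.etree.ElementTree import iterparse

# Stream through the saved feed; the file handle is closed automatically.
with open('xml/test.xml', 'rb') as f:
    for event, elem in iterparse(f):
        if elem.tag == "properties":
            print(elem.text)
        elem.clear()  # drop processed elements so large feeds stay cheap on memory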
Related
I am trying to scrape a site using Scrapy and Selenium.
I can get the web browser to open using Selenium, but I am unable to get the start URL into the browser. At present, the browser opens, does nothing and then closes, while I get the error "<405 https://etc etc>: HTTP status code is not handled or not allowed".
Which, as far as I understand, confirms that I am not able to pass the URL to the browser.
What am I doing wrong here?
import scrapy
import time
from selenium import webdriver
from glassdoor.items import GlassdoorItem

class glassdoorSpider(scrapy.Spider):
    name = "glassdoor"
    allowed_domains = ["glassdoor.co.uk"]
    start_urls = ["https://www.glassdoor.co.uk/Overview/Working-at-Greene-King-EI_IE10160.11,22.htm",
    ]

    def __init__(self):
        self.driver = webdriver.Chrome("C:/Users/andrew/Downloads/chromedriver_win32/chromedriver.exe")

    def parse(self, response):
        self.driver.get(response.url)
        time.sleep(5)
        for sel in response.xpath('//*[@id="EmpStats"]'):
            item = GlassdoorItem()
            item['rating'] = sel.xpath('//*[@class="notranslate ratingNum"]/text()').extract()
            # item['recommend'] = sel.xpath('//*[@class="address"]/text()').extract()
            # item['approval'] = sel.xpath('//*[@class="address"]/text()').extract()
            yield item
        # self.driver.close()
The logs I get from the above are:
2017-01-26 21:49:02 [scrapy] INFO: Scrapy 1.0.5 started (bot: glassdoor)
2017-01-26 21:49:02 [scrapy] INFO: Optional features available: ssl, http11
2017-01-26 21:49:02 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'glassdoor.spiders', 'SPIDER_MODULES': ['glassdoor.spiders'], 'BOT_NAME': 'glassdoor'}
2017-01-26 21:49:02 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2017-01-26 21:49:04 [selenium.webdriver.remote.remote_connection] DEBUG: POST http://127.0.0.1:58378/session {"requiredCapabilities": {}, "desiredCapabilities": {"platform": "ANY", "browserName": "chrome", "version": "", "chromeOptions": {"args": [], "extensions": []}, "javascriptEnabled": true}}
2017-01-26 21:49:06 [selenium.webdriver.remote.remote_connection] DEBUG: Finished Request
2017-01-26 21:49:06 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2017-01-26 21:49:06 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2017-01-26 21:49:06 [scrapy] INFO: Enabled item pipelines:
2017-01-26 21:49:06 [scrapy] INFO: Spider opened
2017-01-26 21:49:06 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-26 21:49:06 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-01-26 21:49:07 [scrapy] DEBUG: Crawled (405) <GET https://www.glassdoor.co.uk/Overview/Working-at-Greene-King-EI_IE10160.11,22.htm> (referer: None)
2017-01-26 21:49:07 [scrapy] DEBUG: Ignoring response <405 https://www.glassdoor.co.uk/Overview/Working-at-Greene-King-EI_IE10160.11,22.htm>: HTTP status code is not handled or not allowed
2017-01-26 21:49:07 [scrapy] INFO: Closing spider (finished)
2017-01-26 21:49:07 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 269,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 7412,
'downloader/response_count': 1,
'downloader/response_status_count/405': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 1, 26, 21, 49, 7, 388000),
'log_count/DEBUG': 5,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2017, 1, 26, 21, 49, 6, 572000)}
2017-01-26 21:49:07 [scrapy] INFO: Spider closed (finished)
OK, as suggested by both replies, I was not passing the correct response (the page source rendered by Selenium) to the selector.
Hence, by adding the line:
response1 = TextResponse(url=response.url, body=self.driver.page_source, encoding='utf-8')
and therefore changing one line of the code as well:
for sel in response1.xpath('//*[@id="EmpStats"]'):
The new code (which works) is:
import scrapy
import time
from selenium import webdriver
from scrapy.http import TextResponse  # needed for the TextResponse used in parse()
from glassdoor.items import GlassdoorItem

class glassdoorSpider(scrapy.Spider):
    header = {"User-Agent": "Mozilla/5.0 Gecko/20100101 Firefox/33.0"}
    name = "glassdoor"
    allowed_domains = ["glassdoor.co.uk"]
    start_urls = ["https://www.glassdoor.co.uk/Overview/Working-at-Greene-King-EI_IE10160.11,22.htm",
    ]

    def __init__(self):
        self.driver = webdriver.Chrome("C:/Users/andrew/Downloads/chromedriver_win32/chromedriver.exe")

    def parse(self, response):
        self.driver.get(response.url)
        response1 = TextResponse(url=response.url, body=self.driver.page_source, encoding='utf-8')
        time.sleep(5)
        for sel in response1.xpath('//*[@id="EmpStats"]'):
            item = GlassdoorItem()
            item['rating'] = sel.xpath('//*[@class="notranslate ratingNum"]/text()').extract()
            # item['recommend'] = sel.xpath('//*[@class="address"]/text()').extract()
            # item['approval'] = sel.xpath('//*[@class="address"]/text()').extract()
            yield item
        # self.driver.close()
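One optional addition (my suggestion, not part of the fix above): since self.driver.close() stays commented out, the Chrome window is never shut down. Scrapy calls a spider's closed() method when the crawl finishes, so the driver could be cleaned up there, for example by adding this method to the spider:

    def closed(self, reason):
        # Called by Scrapy once the spider finishes; quit() shuts down
        # the browser window and the chromedriver process.
        self.driver.quit()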
I was trying to make an authenticated spider. I have referred to almost every post here related to authenticated Scrapy spiders, but I couldn't find an answer for my issue. I have used the following code:
import scrapy
from scrapy.spider import BaseSpider
from scrapy.selector import Selector
from scrapy.http import FormRequest, Request
import logging
from PWC.items import PwcItem

class PwcmoneySpider(scrapy.Spider):
    name = "PWCMoney"
    allowed_domains = ["pwcmoneytree.com"]
    start_urls = (
        'https://www.pwcmoneytree.com/SingleEntry/singleComp?compName=Addicaid',
    )

    def parse(self, response):
        return [scrapy.FormRequest("https://www.pwcmoneytree.com/Account/Login",
                                   formdata={'UserName': 'user', 'Password': 'pswd'},
                                   callback=self.after_login)]

    def after_login(self, response):
        if "authentication failed" in response.body:
            self.log("Login failed", level=logging.ERROR)
            return
        # We've successfully authenticated, let's have some fun!
        print("Login Successful!!")
        return Request(url="https://www.pwcmoneytree.com/SingleEntry/singleComp?compName=Addicaid",
                       callback=self.parse_tastypage)

    def parse_tastypage(self, response):
        for sel in response.xpath('//div[@id="MainDivParallel"]'):
            item = PwcItem()
            item['name'] = sel.xpath('div[@id="CompDiv"]/h2/text()').extract()
            item['location'] = sel.xpath('div[@id="CompDiv"]/div[@id="infoPane"]/div[@class="infoSlot"]/div/a/text()').extract()
            item['region'] = sel.xpath('div[@id="CompDiv"]/div[@id="infoPane"]/div[@id="contactInfoDiv"]/div[1]/a[2]/text()').extract()
            yield item
And I got the following output:
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Python27\PWC>scrapy crawl PWCMoney -o test.csv
2016-04-29 11:37:35 [scrapy] INFO: Scrapy 1.0.5 started (bot: PWC)
2016-04-29 11:37:35 [scrapy] INFO: Optional features available: ssl, http11
2016-04-29 11:37:35 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'PWC.spiders', 'FEED_FORMAT': 'csv', 'SPIDER_MODULES': ['PWC.spiders'], 'FEED_URI': 'test.csv', 'BOT_NAME': 'PWC'}
2016-04-29 11:37:35 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2016-04-29 11:37:36 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-04-29 11:37:36 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-04-29 11:37:36 [scrapy] INFO: Enabled item pipelines:
2016-04-29 11:37:36 [scrapy] INFO: Spider opened
2016-04-29 11:37:36 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-04-29 11:37:36 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-04-29 11:37:37 [scrapy] DEBUG: Retrying <POST https://www.pwcmoneytree.com/Account/Login> (failed 1 times): 500 Internal Server Error
2016-04-29 11:37:38 [scrapy] DEBUG: Retrying <POST https://www.pwcmoneytree.com/Account/Login> (failed 2 times): 500 Internal Server Error
2016-04-29 11:37:38 [scrapy] DEBUG: Gave up retrying <POST https://www.pwcmoneytree.com/Account/Login> (failed 3 times): 500 Internal Server Error
2016-04-29 11:37:38 [scrapy] DEBUG: Crawled (500) <POST https://www.pwcmoneytree.com/Account/Login> (referer: None)
2016-04-29 11:37:38 [scrapy] DEBUG: Ignoring response <500 https://www.pwcmoneytree.com/Account/Login>: HTTP status code is not handled or not allowed
2016-04-29 11:37:38 [scrapy] INFO: Closing spider (finished)
2016-04-29 11:37:38 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 954,
'downloader/request_count': 3,
'downloader/request_method_count/POST': 3,
'downloader/response_bytes': 30177,
'downloader/response_count': 3,
'downloader/response_status_count/500': 3,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 4, 29, 6, 7, 38, 674000),
'log_count/DEBUG': 6,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 3,
'scheduler/enqueued/memory': 3,
'start_time': datetime.datetime(2016, 4, 29, 6, 7, 36, 193000)}
2016-04-29 11:37:38 [scrapy] INFO: Spider closed (finished)
Since I am new to Python and Scrapy, I can't seem to understand the error; I hope someone here could help me.
So, I modified the code like this, taking Rejected's advice (showing only the modified part):
    allowed_domains = ["pwcmoneytree.com"]
    start_urls = (
        'https://www.pwcmoneytree.com/Account/Login',
    )

    def start_requests(self):
        return [scrapy.FormRequest.from_response("https://www.pwcmoneytree.com/Account/Login",
                                                 formdata={'UserName': 'user', 'Password': 'pswd'},
                                                 callback=self.logged_in)]
And I got the following error:
C:\Python27\PWC>scrapy crawl PWCMoney -o test.csv
2016-04-30 11:04:47 [scrapy] INFO: Scrapy 1.0.5 started (bot: PWC)
2016-04-30 11:04:47 [scrapy] INFO: Optional features available: ssl, http11
2016-04-30 11:04:47 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'PWC.spiders', 'FEED_FORMAT': 'csv', 'SPIDER_MODULES': ['PWC.spiders'], 'FEED_URI': 'test.csv', 'BOT_NAME': 'PWC'}
2016-04-30 11:04:50 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2016-04-30 11:04:54 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-04-30 11:04:54 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-04-30 11:04:54 [scrapy] INFO: Enabled item pipelines:
Unhandled error in Deferred:
2016-04-30 11:04:54 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\scrapy\cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "c:\python27\lib\site-packages\scrapy\commands\crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "c:\python27\lib\site-packages\scrapy\crawler.py", line 153, in crawl
    d = crawler.crawl(*args, **kwargs)
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 1274, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "c:\python27\lib\site-packages\scrapy\crawler.py", line 72, in crawl
    start_requests = iter(self.spider.start_requests())
  File "C:\Python27\PWC\PWC\spiders\PWCMoney.py", line 16, in start_requests
    callback=self.logged_in)]
  File "c:\python27\lib\site-packages\scrapy\http\request\form.py", line 36, in from_response
    kwargs.setdefault('encoding', response.encoding)
exceptions.AttributeError: 'str' object has no attribute 'encoding'
2016-04-30 11:04:54 [twisted] CRITICAL:
As seen in your error log, it's the POST request to https://www.pwcmoneytree.com/Account/Login that is giving you a 500 error.
I tried making the same POST request manually, using Postman. It gives the 500 error code and an HTML page containing this error message:
The required anti-forgery cookie "__RequestVerificationToken" is not present.
This is a feature many APIs and websites use to prevent CSRF attacks. If you still want to scrape the site, you would have to first visit the login form and get the proper cookie before logging in.
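A minimal sketch of that flow for this spider (my own sketch, assuming the login form is the first form on the page and that the UserName/Password field names from the question are correct): fetch the login page first, then let FormRequest.from_response carry the hidden anti-forgery field and cookie along:

    def start_requests(self):
        # Visit the login page first so the __RequestVerificationToken
        # cookie and hidden form field exist before submitting credentials.
        yield scrapy.Request("https://www.pwcmoneytree.com/Account/Login",
                             callback=self.login)

    def login(self, response):
        # from_response() pre-fills the hidden anti-forgery field from the form.
        return scrapy.FormRequest.from_response(
            response,
            formdata={'UserName': 'user', 'Password': 'pswd'},
            callback=self.after_login)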
You're making your crawler do work for no reason. Your first request (initiated with start_urls) is being processed, and then the response is discarded. There's very rarely a reason to do this (unless making the request itself is a requirement).
Instead, change your start_urls to "https://www.pwcmoneytree.com/Account/Login", and change scrapy.FormRequest(...) to scrapy.FormRequest.from_response(...). You'll also need to replace the URL argument with the received response object (and possibly identify the desired form).
This will save you a wasted request, fetch/pre-fill other verification tokens, and clean up your code.
EDIT: Below is code you should be using. Note: You changed self.after_login to self.logged_in, so I left it as the newer change.
...
    allowed_domains = ["pwcmoneytree.com"]
    start_urls = (
        'https://www.pwcmoneytree.com/Account/Login',
    )

    def parse(self, response):
        return scrapy.FormRequest.from_response(response,
                                                formdata={'UserName': 'user', 'Password': 'pswd'},
                                                callback=self.logged_in)
...
I'd like to scrape parts of a number of very large websites using Scrapy. For instance, from northeastern.edu I would like to scrape only pages that are below the URL http://www.northeastern.edu/financialaid/, such as http://www.northeastern.edu/financialaid/contacts or http://www.northeastern.edu/financialaid/faq. I do not want to scrape the university's entire web site, i.e. http://www.northeastern.edu/faq should not be allowed.
I have no problem with URLs in the format financialaid.northeastern.edu (by simply limiting the allowed_domains to financialaid.northeastern.edu), but the same strategy doesn't work for northestern.edu/financialaid. (The whole spider code is actually longer as it loops through different web pages, I can provide details. Everything works apart from the rules.)
import scrapy
from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from test.items import testItem

class DomainSpider(CrawlSpider):
    name = 'domain'
    allowed_domains = ['northestern.edu/financialaid']
    start_urls = ['http://www.northestern.edu/financialaid/']
    rules = (
        Rule(LxmlLinkExtractor(allow=(r"financialaid/",)), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        i = testItem()
        #i['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
        #i['name'] = response.xpath('//div[@id="name"]').extract()
        #i['description'] = response.xpath('//div[@id="description"]').extract()
        return i
The results look like this:
2015-05-12 14:10:46-0700 [scrapy] INFO: Scrapy 0.24.4 started (bot: finaid_scraper)
2015-05-12 14:10:46-0700 [scrapy] INFO: Optional features available: ssl, http11
2015-05-12 14:10:46-0700 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'finaid_scraper.spiders', 'SPIDER_MODULES': ['finaid_scraper.spiders'], 'FEED_URI': '/Users/hugo/Box Sync/finaid/ScrapedSiteText_check/Northeastern.json', 'USER_AGENT': 'stanford_sociology', 'BOT_NAME': 'finaid_scraper'}
2015-05-12 14:10:46-0700 [scrapy] INFO: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-05-12 14:10:46-0700 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-05-12 14:10:46-0700 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-05-12 14:10:46-0700 [scrapy] INFO: Enabled item pipelines:
2015-05-12 14:10:46-0700 [graphspider] INFO: Spider opened
2015-05-12 14:10:46-0700 [graphspider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-05-12 14:10:46-0700 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-05-12 14:10:46-0700 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-05-12 14:10:46-0700 [graphspider] DEBUG: Redirecting (301) to <GET http://www.northeastern.edu/financialaid/> from <GET http://www.northeastern.edu/financialaid>
2015-05-12 14:10:47-0700 [graphspider] DEBUG: Crawled (200) <GET http://www.northeastern.edu/financialaid/> (referer: None)
2015-05-12 14:10:47-0700 [graphspider] DEBUG: Filtered offsite request to 'assistive.usablenet.com': <GET http://assistive.usablenet.com/tt/http://www.northeastern.edu/financialaid/index.html>
2015-05-12 14:10:47-0700 [graphspider] DEBUG: Filtered offsite request to 'www.northeastern.edu': <GET http://www.northeastern.edu/financialaid/index.html>
2015-05-12 14:10:47-0700 [graphspider] DEBUG: Filtered offsite request to 'www.facebook.com': <GET http://www.facebook.com/pages/Boston-MA/NU-Student-Financial-Services/113143082891>
2015-05-12 14:10:47-0700 [graphspider] DEBUG: Filtered offsite request to 'twitter.com': <GET https://twitter.com/NUSFS>
2015-05-12 14:10:47-0700 [graphspider] DEBUG: Filtered offsite request to 'nusfs.wordpress.com': <GET http://nusfs.wordpress.com/>
2015-05-12 14:10:47-0700 [graphspider] DEBUG: Filtered offsite request to 'northeastern.edu': <GET http://northeastern.edu/howto>
2015-05-12 14:10:47-0700 [graphspider] INFO: Closing spider (finished)
2015-05-12 14:10:47-0700 [graphspider] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 431,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 9574,
'downloader/response_count': 2,
'downloader/response_status_count/200': 1,
'downloader/response_status_count/301': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 5, 12, 21, 10, 47, 94112),
'log_count/DEBUG': 10,
'log_count/INFO': 7,
'offsite/domains': 6,
'offsite/filtered': 32,
'request_depth_max': 1,
'response_received_count': 1,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2015, 5, 12, 21, 10, 46, 566538)}
2015-05-12 14:10:47-0700 [graphspider] INFO: Spider closed (finished)
The second strategy I attempted was to use allow rules on the LxmlLinkExtractor and to limit the crawl to everything within the subdomain, but in that case the entire website gets crawled. (Deny rules do work.)
import scrapy
from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from test.items import testItem

class DomainSpider(CrawlSpider):
    name = 'domain'
    allowed_domains = ['www.northestern.edu']
    start_urls = ['http://www.northestern.edu/financialaid/']
    rules = (
        Rule(LxmlLinkExtractor(allow=(r"financialaid/",)), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        i = testItem()
        #i['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
        #i['name'] = response.xpath('//div[@id="name"]').extract()
        #i['description'] = response.xpath('//div[@id="description"]').extract()
        return i
I also tried:
rules = (
    Rule(LxmlLinkExtractor(allow=(r"northeastern.edu/financialaid",)), callback='parse_site', follow=True),
)
The log is too long to be posted here, but these lines show that Scrapy ignores the allow-rule:
2015-05-12 14:26:06-0700 [graphspider] DEBUG: Crawled (200) <GET http://www.northeastern.edu/camd/journalism/2014/10/07/prof-leff-talks-american-press-holocaust/> (referer: http://www.northeastern.edu/camd/journalism/2014/10/07/prof-schroeder-quoted-nc-u-s-senate-debates-charlotte-observer/)
2015-05-12 14:26:06-0700 [graphspider] DEBUG: Crawled (200) <GET http://www.northeastern.edu/camd/journalism/tag/north-carolina/> (referer: http://www.northeastern.edu/camd/journalism/2014/10/07/prof-schroeder-quoted-nc-u-s-senate-debates-charlotte-observer/)
2015-05-12 14:26:06-0700 [graphspider] DEBUG: Scraped from <200 http://www.northeastern.edu/camd/journalism/2014/10/07/prof-leff-talks-american-press-holocaust/>
Here is my items.py:
from scrapy.item import Item, Field

class FinAidScraperItem(Item):
    # define the fields for your item here like:
    url = Field()
    linkedurls = Field()
    internal_linkedurls = Field()
    external_linkedurls = Field()
    http_status = Field()
    title = Field()
    text = Field()
I am using Mac, Python 2.7, Scrapy version 0.24.4. Similar questions have been posted before, but none of the suggested solutions fixed my problem.
You have a typo in your URLs used inside spiders, see:
northeastern
vs
northestern
Here is the spider that worked for me (it follows "financialaid" links only):
from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule

class DomainSpider(CrawlSpider):
    name = 'domain'
    allowed_domains = ['northeastern.edu']
    start_urls = ['http://www.northeastern.edu/financialaid/']
    rules = (
        Rule(LinkExtractor(allow=r"financialaid/"), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        print response.url
Note that I'm using LinkExtractor shortcut and a string for the allow argument value.
I've also edited your question and fixed the indentation problems assuming they were just "posting" issues.
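As a side note (my own refinement, untested against the live site): the allow pattern is a regular expression searched against the full absolute URL, so if you want to be stricter than a substring match you can anchor it to the host and path, something like:

    rules = (
        # Follow only URLs whose path starts with /financialaid/ on this host.
        Rule(LinkExtractor(allow=r"^https?://www\.northeastern\.edu/financialaid/"),
             callback='parse_item', follow=True),
    )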
I'm having a problem getting my Scrapy spider to run its callback method.
I don't think it's an indentation error, which seems to be the cause in other, similar posts, but perhaps it is and I just don't see it? Any ideas?
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy import log
import tldextract

class CrawlerSpider(CrawlSpider):
    name = "crawler"

    def __init__(self, initial_url):
        log.msg('initing...', level=log.WARNING)
        CrawlSpider.__init__(self)

        if not initial_url.startswith('http'):
            initial_url = 'http://' + initial_url

        ext = tldextract.extract(initial_url)
        initial_domain = ext.domain + '.' + ext.tld
        initial_subdomain = ext.subdomain + '.' + ext.domain + '.' + ext.tld
        self.allowed_domains = [initial_domain, 'www.' + initial_domain, initial_subdomain]
        self.start_urls = [initial_url]

        self.rules = [
            Rule(SgmlLinkExtractor(), callback='parse_item'),
            Rule(SgmlLinkExtractor(allow_domains=self.allowed_domains), follow=True),
        ]
        self._compile_rules()

    def parse_item(self, response):
        log.msg('parse_item...', level=log.WARNING)
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//a/@href").extract()
        for link in links:
            log.msg('link', level=log.WARNING)
Sample output is below; it should show a warning message with "parse_item..." printed but it doesn't.
$ scrapy crawl crawler -a initial_url=http://www.szuhanchang.com/test.html
2013-02-19 18:03:24+0000 [scrapy] INFO: Scrapy 0.16.4 started (bot: crawler)
2013-02-19 18:03:24+0000 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-02-19 18:03:24+0000 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-02-19 18:03:24+0000 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-02-19 18:03:24+0000 [scrapy] DEBUG: Enabled item pipelines:
2013-02-19 18:03:24+0000 [scrapy] WARNING: initing...
2013-02-19 18:03:24+0000 [crawler] INFO: Spider opened
2013-02-19 18:03:24+0000 [crawler] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-02-19 18:03:24+0000 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-02-19 18:03:24+0000 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-02-19 18:03:25+0000 [crawler] DEBUG: Crawled (200) <GET http://www.szuhanchang.com/test.html> (referer: None)
2013-02-19 18:03:25+0000 [crawler] DEBUG: Filtered offsite request to 'www.20130219-0606.com': <GET http://www.20130219-0606.com/>
2013-02-19 18:03:25+0000 [crawler] INFO: Closing spider (finished)
2013-02-19 18:03:25+0000 [crawler] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 234,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 363,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2013, 2, 19, 18, 3, 25, 84855),
'log_count/DEBUG': 8,
'log_count/INFO': 4,
'log_count/WARNING': 1,
'request_depth_max': 1,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2013, 2, 19, 18, 3, 24, 805064)}
2013-02-19 18:03:25+0000 [crawler] INFO: Spider closed (finished)
Thanks in advance!
The start URL http://www.szuhanchang.com/test.html has only one anchor link, namely:
<a href="http://www.20130219-0606.com/">Test</a>
which points to the domain 20130219-0606.com, and according to your allowed_domains of:
['szuhanchang.com', 'www.szuhanchang.com', 'www.szuhanchang.com']
this Request gets filtered by the OffsiteMiddleware:
2013-02-19 18:03:25+0000 [crawler] DEBUG: Filtered offsite request to 'www.20130219-0606.com': <GET http://www.20130219-0606.com/>
therefore parse_item will not be called for this url.
Changing the name of your callback to parse_start_url seems to work, although since the test URL provided is quite small, I cannot be sure if this will still be effective. Give it a go and let me know. :)
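For clarity, that suggestion would look something like this (a sketch only; I have not run it beyond the test page above):

        # In __init__, point the first rule at parse_start_url instead of parse_item:
        self.rules = [
            Rule(SgmlLinkExtractor(), callback='parse_start_url'),
            Rule(SgmlLinkExtractor(allow_domains=self.allowed_domains), follow=True),
        ]

    # ...and rename the method itself. CrawlSpider also calls parse_start_url for
    # the start URLs, so it fires even when every extracted link is filtered.
    def parse_start_url(self, response):
        log.msg('parse_start_url...', level=log.WARNING)
        hxs = HtmlXPathSelector(response)
        for link in hxs.select("//a/@href").extract():
            log.msg('link: %s' % link, level=log.WARNING)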
I'm trying to parse a site with Scrapy. The URLs I need to parse are formed like this: http://example.com/productID/1234/. These links can be found on pages with addresses like http://example.com/categoryID/1234/. The thing is that my crawler fetches the first categoryID page (http://www.example.com/categoryID/79/, as you can see from the trace below), but nothing more. What am I doing wrong? Thank you.
Here is my Scrapy code:
# -*- coding: UTF-8 -*-

#THIRD-PARTY MODULES
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

class ExampleComSpider(CrawlSpider):
    name = "example.com"
    allowed_domains = ["http://www.example.com/"]
    start_urls = [
        "http://www.example.com/"
    ]
    rules = (
        # Extract links matching 'categoryID/xxx'
        # and follow links from them (since no callback means follow=True by default).
        Rule(SgmlLinkExtractor(allow=('/categoryID/(\d*)/', ), )),
        # Extract links matching 'productID/xxx' and parse them with the spider's method parse_item
        Rule(SgmlLinkExtractor(allow=('/productID/(\d*)/', )), callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
Here is a trace of Scrapy:
2012-01-31 12:38:56+0000 [scrapy] INFO: Scrapy 0.14.1 started (bot: parsers)
2012-01-31 12:38:57+0000 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
2012-01-31 12:38:57+0000 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-01-31 12:38:57+0000 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-01-31 12:38:57+0000 [scrapy] DEBUG: Enabled item pipelines:
2012-01-31 12:38:57+0000 [example.com] INFO: Spider opened
2012-01-31 12:38:57+0000 [example.com] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-01-31 12:38:57+0000 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-01-31 12:38:57+0000 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-01-31 12:38:58+0000 [example.com] DEBUG: Crawled (200) <GET http://www.example.com/> (referer: None)
2012-01-31 12:38:58+0000 [example.com] DEBUG: Filtered offsite request to 'www.example.com': <GET http://www.example.com/categoryID/79/>
2012-01-31 12:38:58+0000 [example.com] INFO: Closing spider (finished)
2012-01-31 12:38:58+0000 [example.com] INFO: Dumping spider stats:
{'downloader/request_bytes': 199,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 121288,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2012, 1, 31, 12, 38, 58, 409806),
'request_depth_max': 1,
'scheduler/memory_enqueued': 1,
'start_time': datetime.datetime(2012, 1, 31, 12, 38, 57, 127805)}
2012-01-31 12:38:58+0000 [example.com] INFO: Spider closed (finished)
2012-01-31 12:38:58+0000 [scrapy] INFO: Dumping global stats:
{'memusage/max': 26992640, 'memusage/startup': 26992640}
It could be a difference between "www.example.com" and "example.com". If it helps, you can use them both, this way:
allowed_domains = ["www.example.com", "example.com"]
Replace:
allowed_domains = ["http://www.example.com/"]
with:
allowed_domains = ["example.com"]
That should do the trick.
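Put together, the top of the spider would look something like this (the rest stays the same):

class ExampleComSpider(CrawlSpider):
    name = "example.com"
    # Domain names only (no scheme, no path), otherwise the OffsiteMiddleware
    # filters requests to www.example.com as offsite.
    allowed_domains = ["example.com"]
    start_urls = [
        "http://www.example.com/"
    ]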