Scrapy: populate items with item loaders over multiple pages - python

I'm trying to crawl and scrape multiple pages, given multiple urls. I am testing with Wikipedia, and to make it easier I just used the same Xpath selector for each page, but I eventually want to use many different Xpath selectors unique to each page, so each page has its own separate parsePage method.
This code works perfectly when I don't use item loaders, and just populate items directly. When I use item loaders, the items are populated strangely, and it seems to be completely ignoring the callback assigned in the parse method and only using the start_urls for the parsePage methods.
import scrapy
from scrapy.http import Request
from scrapy import Spider, Request, Selector
from testanother.items import TestItems, TheLoader

class tester(scrapy.Spider):
    name = 'vs'
    handle_httpstatus_list = [404, 200, 300]
    # Usually, I only get data from the first start url
    start_urls = ['https://en.wikipedia.org/wiki/SANZAAR',
                  'https://en.wikipedia.org/wiki/2016_Rugby_Championship',
                  'https://en.wikipedia.org/wiki/2016_Super_Rugby_season']

    def parse(self, response):
        #item = TestItems()
        l = TheLoader(item=TestItems(), response=response)
        # when I use an item loader, the url in the request is completely ignored.
        # without the item loader, it works properly.
        request = Request("https://en.wikipedia.org/wiki/2016_Rugby_Championship",
                          callback=self.parsePage1, meta={'loadernext': l}, dont_filter=True)
        yield request
        request = Request("https://en.wikipedia.org/wiki/SANZAAR",
                          callback=self.parsePage2, meta={'loadernext1': l}, dont_filter=True)
        yield request
        yield Request("https://en.wikipedia.org/wiki/2016_Super_Rugby_season",
                      callback=self.parsePage3, meta={'loadernext2': l}, dont_filter=True)

    def parsePage1(self, response):
        loadernext = response.meta['loadernext']
        loadernext.add_xpath('title1', '//*[@id="firstHeading"]/text()')
        return loadernext.load_item()
        # I'm not sure if this return and load_item is the problem, because I've tried
        # yielding/returning to another method that does the item loading instead and
        # the first start url is still the only url scraped.

    def parsePage2(self, response):
        loadernext1 = response.meta['loadernext1']
        loadernext1.add_xpath('title2', '//*[@id="firstHeading"]/text()')
        return loadernext1.load_item()

    def parsePage3(self, response):
        loadernext2 = response.meta['loadernext2']
        loadernext2.add_xpath('title3', '//*[@id="firstHeading"]/text()')
        return loadernext2.load_item()
Here's the result when I don't use item loaders:
{'title1': [u'2016 Rugby Championship'],
'title': [u'SANZAAR'],
'title3': [u'2016 Super Rugby season']}
Here's a bit of the log with item loaders:
{'title2': u'SANZAAR'}
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/2016_Rugby_Championship> (referer: https://en.wikipedia.org/wiki/SANZAAR)
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/2016_Rugby_Championship> (referer: https://en.wikipedia.org/wiki/2016_Rugby_Championship)
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/2016_Super_Rugby_season>
{'title2': u'SANZAAR', 'title3': u'SANZAAR'}
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/SANZAAR> (referer: https://en.wikipedia.org/wiki/2016_Rugby_Championship)
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/2016_Rugby_Championship> (referer: https://en.wikipedia.org/wiki/2016_Super_Rugby_season)
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/2016_Super_Rugby_season> (referer: https://en.wikipedia.org/wiki/2016_Rugby_Championship)
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/2016_Super_Rugby_season> (referer: https://en.wikipedia.org/wiki/2016_Super_Rugby_season)
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/2016_Rugby_Championship>
{'title1': u'SANZAAR', 'title2': u'SANZAAR', 'title3': u'SANZAAR'}
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/2016_Rugby_Championship>
{'title1': u'2016 Rugby Championship'}
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/SANZAAR>
{'title1': u'2016 Rugby Championship', 'title2': u'2016 Rugby Championship'}
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/2016_Rugby_Championship>
{'title1': u'2016 Super Rugby season'}
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/SANZAAR> (referer: https://en.wikipedia.org/wiki/2016_Super_Rugby_season)
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/2016_Super_Rugby_season>
{'title1': u'2016 Rugby Championship',
'title2': u'2016 Rugby Championship',
'title3': u'2016 Rugby Championship'}
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/2016_Super_Rugby_season>
{'title1': u'2016 Super Rugby season', 'title3': u'2016 Super Rugby season'}
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/SANZAAR>
{'title1': u'2016 Super Rugby season',
'title2': u'2016 Super Rugby season',
'title3': u'2016 Super Rugby season'}
2016-09-24 14:30:43 [scrapy] INFO: Clos
What exactly is going wrong? Thanks!

One issue is that you're passing multiple references to the same item loader instance into multiple callbacks: the several yield request instructions in parse all carry the same loader.
Also, in the follow-up callbacks the loader is still bound to the old response object; in parsePage1, for example, the item loader is still operating on the response from parse.
In most cases it is not recommended to pass item loaders to another callback. Pass the item objects directly instead.
Here's a short (and incomplete) example, based on your code:
def parse(self, response):
    l = TheLoader(item=TestItems(), response=response)
    request = Request(
        "https://en.wikipedia.org/wiki/2016_Rugby_Championship",
        callback=self.parsePage1,
        meta={'item': l.load_item()},
        dont_filter=True
    )
    yield request

def parsePage1(self, response):
    loadernext = TheLoader(item=response.meta['item'], response=response)
    loadernext.add_xpath('title1', '//*[@id="firstHeading"]/text()')
    return loadernext.load_item()
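If the goal is a single item filled in across all three pages, one possible sketch (not the only way to structure it) is to chain the requests so that each callback builds a fresh loader bound to its own response, seeded with the item collected so far; the URLs and field names below are taken from the question:

def parse(self, response):
    l = TheLoader(item=TestItems(), response=response)
    # add any fields available on the first page here, then pass the item along
    yield Request(
        "https://en.wikipedia.org/wiki/2016_Rugby_Championship",
        callback=self.parsePage1,
        meta={'item': l.load_item()},
        dont_filter=True,
    )

def parsePage1(self, response):
    # new loader bound to *this* response, seeded with the item built so far
    loader = TheLoader(item=response.meta['item'], response=response)
    loader.add_xpath('title1', '//*[@id="firstHeading"]/text()')
    yield Request(
        "https://en.wikipedia.org/wiki/SANZAAR",
        callback=self.parsePage2,
        meta={'item': loader.load_item()},
        dont_filter=True,
    )

def parsePage2(self, response):
    loader = TheLoader(item=response.meta['item'], response=response)
    loader.add_xpath('title2', '//*[@id="firstHeading"]/text()')
    yield Request(
        "https://en.wikipedia.org/wiki/2016_Super_Rugby_season",
        callback=self.parsePage3,
        meta={'item': loader.load_item()},
        dont_filter=True,
    )

def parsePage3(self, response):
    loader = TheLoader(item=response.meta['item'], response=response)
    loader.add_xpath('title3', '//*[@id="firstHeading"]/text()')
    return loader.load_item()

Because each loader is constructed with the response it is actually parsing, the XPath runs against the right page, and only the final callback returns the completed item.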

Related

Why doesn't a callback get executed immediately upon calling yield in Scrapy?

I am building a web scraper to scrape remote jobs. The spider behaves in a way that I don't understand and I'd appreciate it if someone could explain why.
Here's the code for the spider:
import scrapy
import time

class JobsSpider(scrapy.Spider):
    name = "jobs"
    start_urls = [
        "https://stackoverflow.com/jobs/remote-developer-jobs"
    ]
    already_visited_links = []

    def parse(self, response):
        jobs = response.xpath("//div[contains(@class, 'job')]")
        links_to_next_pages = response.xpath("//a[contains(@class, 's-pagination--item')]").css("a::attr(href)").getall()

        # visit each job page (as I do in the browser) and scrape the relevant information (Job title etc.)
        for job in jobs:
            job_id = int(job.xpath('@data-jobid').extract_first())  # there will always be one element
            # now visit the link with the job_id and get the info
            job_link_to_visit = "https://stackoverflow.com/jobs?id=" + str(job_id)
            request = scrapy.Request(job_link_to_visit,
                                     callback=self.parse_job)
            yield request

        # sleep for 10 seconds before requesting the next page
        print("Sleeping for 10 seconds...")
        time.sleep(10)

        # go to the next job listings page (if you haven't already been there)
        # not sure if this solution is the best since it has a loop which has a recursion in it
        for link_to_next_page in links_to_next_pages:
            if link_to_next_page not in self.already_visited_links:
                self.already_visited_links.append(link_to_next_page)
                yield response.follow(link_to_next_page, callback=self.parse)

        print("End of parse method")

    def parse_job(self, response):
        print(response.body)
        print("Sleeping for 10 seconds...")
        time.sleep(10)
        pass
Here's the output (the relevant parts):
Sleeping for 10 seconds...
End of parse method
2021-04-29 20:49:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/jobs?id=525754> (referer: https://stackoverflow.com/jobs/remote-developer-jobs)
2021-04-29 20:49:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/jobs?id=525748> (referer: https://stackoverflow.com/jobs/remote-developer-jobs)
2021-04-29 20:49:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/jobs?id=497114> (referer: https://stackoverflow.com/jobs/remote-developer-jobs)
2021-04-29 20:49:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/jobs?id=523136> (referer: https://stackoverflow.com/jobs/remote-developer-jobs)
2021-04-29 20:49:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/jobs?id=525730> (referer: https://stackoverflow.com/jobs/remote-developer-jobs)
In parse_job
2021-04-29 20:50:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/jobs/remote-developer-jobs?so_source=JobSearch&so_medium=Internal> (referer: https://stackoverflow.com/jobs/remote-developer-jobs)
2021-04-29 20:50:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/jobs?id=523319> (referer: https://stackoverflow.com/jobs/remote-developer-jobs)
2021-04-29 20:50:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/jobs?id=522480> (referer: https://stackoverflow.com/jobs/remote-developer-jobs)
2021-04-29 20:50:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/jobs?id=511761> (referer: https://stackoverflow.com/jobs/remote-developer-jobs)
2021-04-29 20:50:06 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/jobs?id=522483> (referer: https://stackoverflow.com/jobs/remote-developer-jobs)
2021-04-29 20:50:06 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/jobs?id=249610> (referer: https://stackoverflow.com/jobs/remote-developer-jobs)
2021-04-29 20:50:06 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stackoverflow.com/jobs?id=522481> (referer: https://stackoverflow.com/jobs/remote-developer-jobs)
In parse_job
In parse_job
In parse_job
In parse_job
...
I don't understand why the parse method runs to completion before the parse_job method gets called. From my understanding, as soon as I yield a job from jobs, the parse_job method should be called. The spider should go over each page of job listings and visit the details of each individual job on that listings page. However, that description doesn't match the output. I also don't understand why there are multiple GET requests between each call to the parse_job method.
Can someone explain what is going on here?
Scrapy is event driven. Requests are first queued by the Scheduler, queued requests are passed to the Downloader, and when a response has been downloaded and is ready, the callback function is called with that response as its first argument.
You are blocking the callbacks by using time.sleep(). In the logs shown, after the first callback call the process was blocked for 10 seconds inside parse_job(), but the Downloader kept working in the meantime and was getting responses ready for the callback, as the successive DEBUG: Crawled (200) lines after the first parse_job() call show. So, while the callback was blocked, the Downloader finished its job and the responses were queued up to be fed to the callback function. As the last part of the logs shows, passing responses to the callback function became the bottleneck.
If you want to put a delay between requests, it's better to use the DOWNLOAD_DELAY setting instead of time.sleep().
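A minimal sketch of how that could look, assuming a standard Scrapy project (the value 10 simply mirrors the sleep in the question); the setting can live in settings.py or in the spider's custom_settings:

# settings.py
DOWNLOAD_DELAY = 10  # Scrapy waits between requests without blocking callbacks

# or per spider
class JobsSpider(scrapy.Spider):
    name = "jobs"
    custom_settings = {"DOWNLOAD_DELAY": 10}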
Take a look at the Scrapy architecture documentation for more details.

Can't make my first spider run,any advice?

This is my first time using Scrapy and maybe my third time using Python, so I'm a noob.
The problem with this code is that it doesn't even enter the page.
I have tried to use:
scrapy shell 'https://www.zooplus.es/shop/tienda_perros/pienso_perros/pienso_hipoalergenico'
This works and then using...
response.xpath('//*[@class="product__varianttitle ui-text--small"]')
... I can retrieve information.
My code:
import scrapy

class ZooplusSpider(scrapy.Spider):
    name = 'Zooplus'
    allowed_domains = ['zooplus.es']
    start_urls = ['https://www.zooplus.es/shop/tienda_perros/pienso_perros/pienso_hipoalergenico']

    def parse(self, response):
        item = scrapy.Item()
        item['nombre'] = response.xpath('//*[@class="product__varianttitle ui-text--small"]')
        item['preciooriginal'] = response.xpath('//*[@class="product__prices_col prices"]')
        item['preciorebaja'] = response.xpath('//*[@class="product__specialprice__text"]')
        return item
The error message says:
2019-08-30 21:16:57 [scrapy.core.engine] INFO: Spider opened
2019-08-30 21:16:57 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-08-30 21:16:57 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-08-30 21:16:57 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.zooplus.es/robots.txt> (referer: None)
2019-08-30 21:16:57 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://www.zooplus.es/shop/tienda_perros/pienso_perros/pienso_hipoalergenico> from <GET https://www.zooplus.es/shop/tienda_perros/pienso_perros/pienso_hipoalergenico/>
2019-08-30 21:16:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.zooplus.es/shop/tienda_perros/pienso_perros/pienso_hipoalergenico> (referer: None)
2019-08-30 21:16:58 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.zooplus.es/shop/tienda_perros/pienso_perros/pienso_hipoalergenico> (referer: None)
I think you haven't defined the fields in your items.py; the error is coming from item['nombre'].
Either define the fields in items.py, or simply replace
item = scrapy.Item()
with item = dict()
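If you go the items.py route, a minimal sketch could look like the following (the class name ZooplusItem is chosen here for illustration; use whatever name your project actually defines and imports):

# items.py
import scrapy

class ZooplusItem(scrapy.Item):
    nombre = scrapy.Field()
    preciooriginal = scrapy.Field()
    preciorebaja = scrapy.Field()

Then instantiate it in the spider with item = ZooplusItem() instead of scrapy.Item(); a bare scrapy.Item() has no declared fields, so item['nombre'] = ... raises a KeyError.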

scraping site logos

I have sites and I want to scrape their logos.
PROBLEM:
I have an outer class in which I save all the data about the logos (urls, links); everything is working correctly:
class PatternUrl:
    def __init__(self, path_to_img="", list_of_conditionals=[]):
        self.url_pattern = ""
        self.file_url = ""
        self.path_to_img = path_to_img
        self.list_of_conditionals = list_of_conditionals

    def find_obj(self, response):
        for el in self.list_of_conditionals:
            if el:
                if self.path_to_img:
                    url = response
                    file_url = str(self.path_to_img)
                    print(file_url)
                    yield LogoScrapeItem(url=url, file_url=file_url)

class LogoSpider(scrapy.Spider):
    ....
    def parse(self, response):
        a = PatternUrl(response.css("header").xpath("//a[@href='" + response.url + '/' + "']/img/@src").extract_first(),
                       [response.css("header").xpath("//a[@href='" + response.url + '/' + "']")])
        a.find_obj(response)
The problem is in the yield line
yield LogoScrapeItem(url=url, file_url=file_url)
For some reason, when I comment out this line, all the lines in this method are executed.
Output when yield is commented out:
#yield LogoScrapeItem(url=url, file_url=file_url)
2017-12-25 11:09:32 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://time.com> (referer: None)
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAKQAAAAyCAYAAAD........
2017-12-25 11:09:32 [scrapy.core.engine] INFO: Closing spider (finished)
2017-12-25 11:09:32 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
Output when yield is not commented out:
yield LogoScrapeItem(url=url, file_url=file_url)
2017-12-25 11:19:28 [scrapy.core.engine] INFO: Spider opened
2017-12-25 11:19:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-25 11:19:28 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2017-12-25 11:19:28 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://git-scm.com/robots.txt> (referer: None)
2017-12-25 11:19:28 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://git-scm.com/docs/git-merge> (referer: None)
2017-12-25 11:19:28 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://time.com/robots.txt> (referer: None)
2017-12-25 11:19:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://time.com> (referer: None)
2017-12-25 11:19:29 [scrapy.core.engine] INFO: Closing spider (finished)
2017-12-25 11:19:29 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 926,
QUESTION:
Why is the function not executed when there is a yield statement?
Yield is designed to produce a generator.
It looks like you should run your find_obj as:
for x in a.find_obj(response):
instead.
For details on yield please see What does the "yield" keyword do?
Your find_obj method is actually a generator because of the yield keyword. For a thorough explanation on generators and yield I recommend this StackOverflow question.
In order to get results from your method, you should call it in a manner similar to this:
for logo_scrape_item in a.find_obj(response):
    # perform an action on your logo_scrape_item
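Applied to the spider in the question, parse could iterate the generator and re-yield each item so Scrapy actually receives them (a sketch reusing PatternUrl and the selectors from the question):

def parse(self, response):
    a = PatternUrl(
        response.css("header").xpath("//a[@href='" + response.url + "/']/img/@src").extract_first(),
        [response.css("header").xpath("//a[@href='" + response.url + "/']")],
    )
    # find_obj is a generator: iterating it runs the body and yields LogoScrapeItem objects
    for logo_scrape_item in a.find_obj(response):
        yield logo_scrape_item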

Scrapy - Getting the data from filterbox(Python)

I've got a problem with Scrapy. I need to get all the city names from the red-circled part in the image linked below, but with my code I can't return anything. I tried many alternatives without success. How can I solve this problem and get these city names? The link to the image and the source code are below.
import scrapy
from scrapy.spiders import CrawlSpider
#from city_crawl.items import CityCrawlItem

class details(CrawlSpider):
    name = "city_crawling"
    start_urls = ['https://www.booking.com/searchresults.tr.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM&sid=cfc09bd0db4d07c7b55902c6d0ae81a5&track_lsso=1&sb=1&src=index&src_elem=sb&error_url=https%3A%2F%2Fwww.booking.com%2Findex.tr.html%3Flabel%3Dgen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM%3Bsid%3Dcfc09bd0db4d07c7b55902c6d0ae81a5%3Bsb_price_type%3Dtotal%26%3B&ss=isve%C3%A7&checkin_monthday=&checkin_month=&checkin_year=&checkout_monthday=&checkout_month=&checkout_year=&room1=A%2CA&no_rooms=1&group_adults=2&group_children=0']

    def parse(self, response):
        for content in response.xpath('//a[contains(@data-name, "uf")]'):
            yield {
                'text': content.css('span.filter_label::text').extract()
            }
Image of the page source I need to parse; the red-circled part on the left is what I need to get.
Your for loop selects the <a> elements whose class contains "uf", which returns nothing. You should select the elements whose data-name contains "uf"; you can change your code like this:
for content in response.xpath('//a[contains(@data-name, "uf")]'):
    yield {
        'text': content.css('span.filter_label::text').extract()
    }
Update:
I have tested your URL and you are right, it returns nothing. The root cause is that Scrapy redirects three times and finally ends up on the wrong page: it crawls "https://www.booking.com/country/se.tr.html", which is not the same page as the one shown in your image. The log is below:
2017-04-30 15:18:47 [scrapy] DEBUG: Redirecting (301) to <GET https://www.booking.com/searchresults.tr.html?ss=isve%25C3%25A7> from <GET https://www.booking.com/searchresults.tr.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM&sid=cfc09bd0db4d07c7b55902c6d0ae81a5&track_lsso=1&sb=1&src=index&src_elem=sb&error_url=https%3A%2F%2Fwww.booking.com%2Findex.tr.html%3Flabel%3Dgen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM%3Bsid%3Dcfc09bd0db4d07c7b55902c6d0ae81a5%3Bsb_price_type%3Dtotal%26%3B&ss=isve%C3%A7&checkin_monthday=&checkin_month=&checkin_year=&checkout_monthday=&checkout_month=&checkout_year=&room1=A%2CA&no_rooms=1&group_adults=2&group_children=0>
2017-04-30 15:18:48 [scrapy] DEBUG: Redirecting (301) to <GET https://www.booking.com/searchresults.tr.html?ss=isve%C3%A7> from <GET https://www.booking.com/searchresults.tr.html?ss=isve%25C3%25A7>
2017-04-30 15:18:48 [scrapy] DEBUG: Redirecting (302) to <GET https://www.booking.com/country/se.tr.html> from <GET https://www.booking.com/searchresults.tr.html?ss=isve%C3%A7>
2017-04-30 15:18:49 [scrapy] DEBUG: Crawled (200) <GET https://www.booking.com/country/se.tr.html> (referer: None)
2017-04-30 15:18:49 [scrapy] INFO: Closing spider (finished)
Solution:
You could try saving the HTML file on your local PC as I did (named "Booking.html") and then change your code to:
import scrapy

class CitiesSpider(scrapy.Spider):
    name = "city_crawling"
    start_urls = [
        'file:///F:/algorithm%20study/python/StackOverFlow/Booking.html',  # put the saved html file directory here
        # 'https://www.booking.com/searchresults.tr.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM&sid=cfc09bd0db4d07c7b55902c6d0ae81a5&track_lsso=1&sb=1&src=index&src_elem=sb&error_url=https%3A%2F%2Fwww.booking.com%2Findex.tr.html%3Flabel%3Dgen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM%3Bsid%3Dcfc09bd0db4d07c7b55902c6d0ae81a5%3Bsb_price_type%3Dtotal%26%3B&ss=isve%C3%A7&checkin_monthday=&checkin_month=&checkin_year=&checkout_monthday=&checkout_month=&checkout_year=&room1=A%2CA&no_rooms=1&group_adults=2&group_children=0',
    ]

    def parse(self, response):
        #self.logger.info('A response from %s just arrived!', response.url)
        for content in response.xpath('//a[contains(@data-name, "uf")]'):
            #self.logger.info('TEST %s TEST', content.css('span.filter_label::text').extract())
            yield {
                'text': content.css('span.filter_label::text').extract()
            }
Run the crawl command in your Scrapy project: scrapy crawl city_crawling. It will start scraping the information you want; see the logs and output below:
2017-04-30 15:33:31 [scrapy] DEBUG: Crawled (200) <GET file:///F:/algorithm%20study/python/StackOverFlow/Booking.html> (referer: None)
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nStockholm\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nG\xf6teborg\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nVisby\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nFalkenberg\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nMalm\xf6\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nLysekil\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nSimrishamn\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nLund\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nK\xf6pingsvik\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nBorgholm\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nJ\xf6nk\xf6ping\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nUppsala\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nF\xe4rjestaden\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nHelsingborg\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nRonneby\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nYstad\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nHalmstad\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nKivik\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nBorrby\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nFj\xe4llbacka\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nKarlskrona\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nGr\xe4nna\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nL\xf6ttorp\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\nNorrk\xf6ping\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/StackOverFlow/Booking.html>
{'text': [u'\n\xd6rebro\n']}
2017-04-30 15:33:31 [scrapy] INFO: Closing spider (finished)
def parse(self, response):
    for content in response.xpath('//a[contains(@class, "uf")]'):
        yield {
            'text': content.css('span.filter_label::text').extract(),
        }
You need to keep the comma at the end of
'text': content.css('span.filter_label::text').extract()
def parse(self, response):
    for content in response.css('a[data-name=uf]'):
        yield {
            'text': content.css('span.filter_label::text').extract(),
        }
Check it now; it works.

Crawling redirected urls with scrapy

I'm trying to use Scrapy to crawl www.mywebsite.com.
www.mywebsite.com is hosted on a free host at www.mywebsite.freehost.com, and I am redirecting the free host to my paid domain.
The problem is that Scrapy ignores the redirect, and the end result is that 0 pages are scraped.
How do I tell Scrapy to crawl the redirected URL? I only need it to crawl the redirected URL, not other URLs that lead out of the website (like Facebook pages etc.).
2016-11-27 14:48:42 [scrapy] INFO: Spider opened
2016-11-27 14:48:42 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-11-27 14:48:42 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-11-27 14:48:44 [scrapy] DEBUG: Crawled (200) <GET http://www.mywebsite.com/> (referer: None)
2016-11-27 14:48:44 [scrapy] DEBUG: Filtered offsite request to 'www.mywebsite.freehost.net': <GET www.mywebsite.freehost.net>
2016-11-27 14:48:44 [scrapy] INFO: Closing spider (finished)
2016-11-27 14:48:44 [scrapy] INFO: Dumping Scrapy stats:
The logs show that your request is being filtered:
DEBUG: Filtered offsite request to 'www.mywebsite.freehost.net': <GET www.mywebsite.freehost.net>
Add that domain freehost.net to your allowed_domains list, or remove allowed_domains from your spider to allow every domain.
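For example, a minimal sketch (substitute your real domains):

class MySiteSpider(scrapy.Spider):
    name = "mysite"
    # include every domain the crawl may touch, including the redirect target
    allowed_domains = ["mywebsite.com", "mywebsite.freehost.net"]
    start_urls = ["http://www.mywebsite.com/"]

Alternatively, drop allowed_domains entirely; then no offsite filtering is applied and any URL you request will be crawled.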
