scrapy log issue - python

I have multiple spiders in one project. The problem is that right now I am defining LOG_FILE in settings.py like
LOG_FILE = "scrapy_%s.log" % datetime.now()
What I want is scrapy_SPIDERNAME_DATETIME,
but I am unable to include the spider name in the log file name.
I found
scrapy.log.start(logfile=None, loglevel=None, logstdout=None)
and called it in each spider's __init__ method, but it's not working.
Any help would be appreciated.

The spider's __init__() is not early enough to call log.start() by itself since the log observer is already started at this point; therefore, you need to reinitialize the logging state to trick Scrapy into (re)starting it.
In your spider class file:
from datetime import datetime
from scrapy import log
from scrapy.spider import BaseSpider
class ExampleSpider(BaseSpider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/"]

    def __init__(self, name=None, **kwargs):
        LOG_FILE = "scrapy_%s_%s.log" % (self.name, datetime.now())
        # remove the current log
        # log.log.removeObserver(log.log.theLogPublisher.observers[0])
        # re-create the default Twisted observer which Scrapy checks
        log.log.defaultObserver = log.log.DefaultObserver()
        # start the default observer so it can be stopped
        log.log.defaultObserver.start()
        # trick Scrapy into thinking logging has not started
        log.started = False
        # start the new log file observer
        log.start(LOG_FILE)
        # continue with the normal spider init
        super(ExampleSpider, self).__init__(name, **kwargs)

    def parse(self, response):
        ...
And the output file might look like:
scrapy_example_2012-08-25 12:34:48.823896.log

There should be a BOT_NAME in your settings.py. This is the project/spider name. So in your case, this would be
LOG_FILE = "scrapy_%s_%s.log" % (BOT_NAME, datetime.now())
This is pretty much the same as what Scrapy does internally.
But why not use log.msg? The docs clearly state that it is meant for spider-specific messages. It might be easier to use it and just extract/grep/... the different spiders' log messages from one big log file.
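For example, with the old scrapy.log API you could tag every message with the spider, which makes it easy to grep one spider's lines out of a shared log file (a sketch; log.msg was removed in later Scrapy versions):
from scrapy import log
from scrapy.spider import BaseSpider


class ExampleSpider(BaseSpider):
    name = "example"

    def parse(self, response):
        # the spider argument makes the log line carry the spider name,
        # so `grep example scrapy.log` pulls out just this spider's messages
        log.msg("Parsed %s" % response.url, level=log.INFO, spider=self)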
A more complicated approach would be to get the package locations from the SPIDER_MODULES setting and load all the spiders inside those packages, as sketched below.
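A sketch of that idea, using the same helpers Scrapy's own spider manager relies on (walk_modules and iter_spider_classes; treat the exact import paths as an assumption for your Scrapy version):
from scrapy.utils.misc import walk_modules
from scrapy.utils.project import get_project_settings
from scrapy.utils.spider import iter_spider_classes


def iter_project_spiders():
    # yield every spider class found in the packages listed in SPIDER_MODULES
    settings = get_project_settings()
    for module_path in settings.getlist('SPIDER_MODULES'):
        for module in walk_modules(module_path):
            for spider_cls in iter_spider_classes(module):
                yield spider_cls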

You can use Scrapy's storage URI parameters in your settings.py file for the feed URI (FEED_URI).
%(name)s
%(time)s
For example: /tmp/crawled/%(name)s/%(time)s.log
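In settings.py that might look like this (a sketch; note that this names the feed export file, not the log file, and the export format is an assumption):
# %(name)s and %(time)s are filled in per spider and per run
FEED_URI = "/tmp/crawled/%(name)s/%(time)s.log"
FEED_FORMAT = "jsonlines"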

Related

Remove INFO logs from scrapy.middleware and scrapy.crawler

Does anyone know if there is a way to set different log levels for Scrapy's modules? I want to log scraped items and the requests sent in a log file, but the logs coming from the scrapy.middleware, scrapy.crawler and scrapy.utils.log modules are always the same and do not add value to the log file.
My biggest constraint is that I have to do everything outside of the spiders (in the pipelines, the settings.py file, etc.). I have more than 200 spiders and cannot possibly add code to each of them.
Scrapy's doc says it is possible to modify the level for a specific logger in the advanced customization section, but it does not seem to work when this is set in the settings.py file. My guess is that the logs from scrapy.middleware and scrapy.crawler are logged before the spider evaluates the settings.py file.
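The advanced customization approach I am referring to amounts to roughly this, placed in settings.py (a sketch, with the module names I want to silence):
import logging

logging.getLogger('scrapy.middleware').setLevel(logging.WARNING)
logging.getLogger('scrapy.crawler').setLevel(logging.WARNING)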
I have read scrapy's doc extensively but I cannot seem to find the answer. I don't want to have to recreate my own loggers since some of Scrapy's logs are useful, like the ones logging requests sent and errors.
I can provide code extracts if necessary. Thank you.
You can create a Scrapy extension that manipulates the various log levels, setting them to higher values for the ones you do not want to appear. The first 3 logs that come from scrapy.utils.log are emitted before Scrapy gets to loading its extensions, so for those 3 I am not sure what to do beyond turning logging off entirely and implementing the logging yourself.
Here is an example of the extension:
extension.py
import logging

from scrapy.exceptions import NotConfigured
from scrapy import signals

logger = logging.getLogger(__name__)


class CustomLogExtension:
    def __init__(self):
        self.level = logging.WARNING
        self.modules = ['scrapy.utils.log', 'scrapy.middleware',
                        'scrapy.extensions.logstats', 'scrapy.statscollectors',
                        'scrapy.core.engine', 'scrapy.core.scraper',
                        'scrapy.crawler', 'scrapy.extensions',
                        __name__]
        # raise the level of every listed logger so its INFO/DEBUG output is dropped
        for module in self.modules:
            logging.getLogger(module).setLevel(self.level)

    @classmethod
    def from_crawler(cls, crawler):
        if not crawler.settings.getbool('CUSTOM_LOG_EXTENSION'):
            raise NotConfigured
        ext = cls()
        crawler.signals.connect(
            ext.spider_opened, signal=signals.spider_opened
        )
        return ext

    def spider_opened(self, spider):
        logger.debug("This log should not appear.")
Then in your settings.py:
settings.py
CUSTOM_LOG_EXTENSION = True

EXTENSIONS = {
    'scrapy.extensions.telnet.TelnetConsole': None,
    'my_project_name.extension.CustomLogExtension': 1,
}
The example above removes pretty much all of the logs produced by scrapy. If you only want to keep the request logs, then just remove scrapy.core.engine from the self.modules list in the Extension constructor.

Reading settings in spider scrapy

I wrote a small Scrapy spider. Following is my code:
class ElectronicsSpider(scrapy.Spider):
    name = "electronics"
    allowed_domains = ["www.olx.com"]
    start_urls = ['http://www.olx.com/']

    def parse(self, response):
        pass
My question is, I want to read name, allowed_domains and start_urls using settings. How can I do this?
I tried importing
from scrapy.settings import Settings
and also tried this:
def __init__(self, crawler):
    self.settings = crawler.settings
but I got None / an error. How can I read settings in my spider?
from scrapy.utils.project import get_project_settings
settings = get_project_settings()
print settings.get('NAME')
Using this code we can read data from the settings file.
self.settings is not yet initiated in __init__(). You can check self.settings in start_requests().
def start_requests(self):
    print self.settings
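If you need the settings earlier than start_requests(), another common pattern is to grab them in from_crawler(), which receives the crawler and its settings before the spider is opened (a sketch; SOME_SETTING is a hypothetical setting name):
class ElectronicsSpider(scrapy.Spider):
    name = "electronics"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(ElectronicsSpider, cls).from_crawler(crawler, *args, **kwargs)
        # crawler.settings is available here, before the crawl starts
        spider.my_setting = crawler.settings.get('SOME_SETTING')
        return spider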
I think if you want to access the Scrapy settings.py then the answer from @Sellamani is good. But I guess name, allowed_domains and start_urls are not variables defined in settings.py. If you want to have the same kind of arrangement, make your own config file like this, yourown.cfg:
[Name]
crawler_name=electronics
[DOMAINS]
allowed_domains=http://example.com
and then in your program use the ConfigParser module like this to access yourown.cfg:
import ConfigParser

config = ConfigParser.ConfigParser()
config.read('yourown.cfg')  # assuming it is at the same location
name = config.get('Name', 'crawler_name')
allowed_domains = config.get('DOMAINS', 'allowed_domains')
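Tying it together, the spider can then set its attributes from that file in __init__ (a sketch; it assumes yourown.cfg sits in the working directory and that allowed_domains holds a comma-separated list):
import ConfigParser

import scrapy


class ElectronicsSpider(scrapy.Spider):
    name = "electronics"

    def __init__(self, *args, **kwargs):
        super(ElectronicsSpider, self).__init__(*args, **kwargs)
        config = ConfigParser.ConfigParser()
        config.read('yourown.cfg')
        self.allowed_domains = config.get('DOMAINS', 'allowed_domains').split(',')
        self.start_urls = ['http://www.olx.com/']  # could also come from the cfg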

Can't override settings in Scrapy after the init function of a spider

Is it possible to override Scrapy settings after the init function of a spider?
For example, if I want to get settings from a DB and I pass my query parameters as arguments from the command line.
def __init__(self, spider_id, **kwargs):
    self.spider_id = spider_id
    self.set_params(spider_id)
    super(Base_Crawler, self).__init__(**kwargs)

def set_params(self, spider_id):
    # TODO:
    # make a query in the DB
    # get variables from the query result
    # override settings
    pass
Technically you can "override" settings after initialization of the spider, however it would affect nothing because most of them are applied earlier.
What you can actually do is pass parameters to the spider as command-line options using -a and override project settings using -s, for example:
Spider:
class TheSpider(scrapy.Spider):
    name = 'thespider'

    def __init__(self, *args, **kwargs):
        self.spider_id = kwargs.pop('spider_id', None)
        super(TheSpider, self).__init__(*args, **kwargs)
CLI:
scrapy crawl thespider -a spider_id=XXX -s SETTING_TO_OVERRIDE=YYY
If you need something more advanced, consider writing a custom runner that wraps your spider. Below is an example from the docs:
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
process = CrawlerProcess(get_project_settings())
# 'followall' is the name of one of the spiders of the project.
process.crawl('followall', domain='scrapinghub.com')
process.start() # the script will block here until the crawling is finished
Just replace get_project_settings with your own routine that returns a Settings instance.
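For example, a sketch of such a routine that layers database-driven overrides on top of the project settings (load_overrides_from_db is a hypothetical helper returning a dict of setting names to values):
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


def get_settings_for(spider_id):
    settings = get_project_settings()
    # apply per-spider overrides fetched from the database
    for name, value in load_overrides_from_db(spider_id).items():
        settings.set(name, value, priority='cmdline')
    return settings


process = CrawlerProcess(get_settings_for(spider_id='XXX'))
process.crawl('thespider', spider_id='XXX')
process.start()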
Anyway, avoid overloading the spider's code with non-scraping logic, to keep it clean and reusable.

How to specify log file name with spider's name in scrapy?

I'm using Scrapy. In my Scrapy project I created several spider classes, and as the official documentation says, I used this way to specify the log file name:
def logging_to_file(file_name):
    """
    :rtype: logging
    :type file_name: str
    :param file_name:
    :return:
    """
    import logging
    from scrapy.utils.log import configure_logging

    configure_logging(install_root_handler=False)
    logging.basicConfig(
        filename=file_name + '.txt',
        filemode='a',
        format='%(levelname)s: %(message)s',
        level=logging.DEBUG,
    )


class Spider_One(scrapy.Spider):
    name = 'xxx1'
    logging_to_file(name)
    ...


class Spider_Two(scrapy.Spider):
    name = 'xxx2'
    logging_to_file(name)
    ...
Now, if I start Spider_One, everything is correct! But if I start Spider_Two, the log file of Spider_Two will also be named with the name of Spider_One!
I have searched many answers on Google and Stack Overflow, but unfortunately none worked!
I am using Python 2.7 and Scrapy 1.1!
Hope anyone can help me!
It's because you invoke logging_to_file every time your package is loaded. You are using a class variable here where you should use an instance variable.
When Python loads your package or module, it loads every class body, and so on.
class MyClass:
    # everything you do here runs every time the package is loaded
    name = 'something'

    def __init__(self):
        # everything you do here runs ONLY when an object is created
        # from this class
        pass
To resolve your issue, just move the logging_to_file call into your spider's __init__() method.
class MyClass(Spider):
    name = 'xx1'

    def __init__(self):
        super(MyClass, self).__init__()
        logging_to_file(self.name)
One option is to define the LOG_FILE setting in Spider.custom_settings:
from scrapy import Spider


class MySpider(Spider):
    name = "myspider"
    start_urls = ["https://toscrape.com"]
    custom_settings = {
        "LOG_FILE": f"{name}.log",
    }

    def parse(self, response):
        pass
However, because logging starts before the spider custom settings are read, the first few lines of the log will still go into the standard error output.
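If you also need those first few lines in the per-spider file, one possible workaround (a sketch, not from the original answers) is to attach a logging.FileHandler yourself in the spider's __init__; anything logged before that point still goes wherever Scrapy was already sending it:
import logging

from scrapy import Spider


class MySpider(Spider):
    name = "myspider"
    start_urls = ["https://toscrape.com"]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # route everything logged from here on into a per-spider file
        handler = logging.FileHandler(f"{self.name}.log")
        handler.setFormatter(
            logging.Formatter("%(asctime)s [%(name)s] %(levelname)s: %(message)s")
        )
        logging.getLogger().addHandler(handler)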

Scrapy spider not following links when using Celery

I'm writing a crawler in Python that crawls all pages in a given domain, as part of a domain-specific search engine. I'm using Django, Scrapy, and Celery to achieve this. The scenario is as follows:
I receive a domain name from the user and call the crawl task inside the view, passing the domain as an argument:
crawl.delay(domain)
The task itself just calls a function that starts the crawling process:
from .crawler.crawl import run_spider
from celery import shared_task


@shared_task
def crawl(domain):
    return run_spider(domain)
run_spider starts the crawling process, as in this SO answer, replacing MySpider with WebSpider.
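For context, run_spider follows roughly the pattern from that answer, wrapping the crawl in a separate process so the Twisted reactor can be restarted per task (a sketch based on the Scrapy 0.24-era "run from a script" API; the exact code is in the linked answer and the import path for WebSpider is assumed):
from multiprocessing import Process

from twisted.internet import reactor
from scrapy import signals
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings

from .web_spider import WebSpider  # assumed module path


class WebCrawlerScript(Process):
    def __init__(self, spider):
        Process.__init__(self)
        settings = get_project_settings()
        self.crawler = Crawler(settings)
        self.crawler.configure()
        # stop the reactor once the spider finishes (see the update below)
        self.crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
        self.spider = spider

    def run(self):
        self.crawler.crawl(self.spider)
        self.crawler.start()
        reactor.run()


def run_spider(domain):
    spider = WebSpider(domain=domain)
    crawler = WebCrawlerScript(spider)
    crawler.start()
    crawler.join()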
WebSpider inherits from CrawlSpider and I'm using it now just to test functionality. The only rule defined takes an SgmlLinkExtractor instance and a callback function parse_page which simply extracts the response url and the page title, populates a new DjangoItem (HTMLPageItem) with them and saves it into the database (not so efficient, I know).
from urlparse import urlparse

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy.contrib.spiders import Rule, CrawlSpider

from ..items import HTMLPageItem


class WebSpider(CrawlSpider):
    name = "web"

    def __init__(self, **kw):
        super(WebSpider, self).__init__(**kw)
        url = kw.get('domain') or kw.get('url')
        if not (url.startswith('http://') or url.startswith('https://')):
            url = "http://%s/" % url
        self.url = url
        self.allowed_domains = [urlparse(url).hostname.lstrip('www.')]
        self.start_urls = [url]
        self.rules = [
            Rule(SgmlLinkExtractor(
                allow_domains=self.allowed_domains,
                unique=True), callback='parse_page', follow=True)
        ]

    def parse_start_url(self, response):
        return self.parse_page(response)

    def parse_page(self, response):
        sel = Selector(response)
        item = HTMLPageItem()
        item['url'] = response.request.url
        item['title'] = sel.xpath('//title/text()').extract()[0]
        item.save()
        return item
The problem is the crawler only crawls the start_urls and does not follow links (or call the callback function) when following this scenario and using Celery. However calling run_spider through python manage.py shell works just fine!
Another problem is that Item Pipelines and logging are not working with Celery. This is making debugging much harder. I think these problems might be related.
So after inspecting Scrapy's code and enabling Celery logging, by inserting these two lines in web_spider.py:
from celery.utils.log import get_task_logger
logger = get_task_logger(__name__)
I was able to locate the problem:
In the initialization function of WebSpider:
super(WebSpider, self).__init__(**kw)
The __init__ function of the parent CrawlSpider calls the _compile_rules function, which in short copies the rules from self.rules to self._rules while making some changes. self._rules is what the spider uses when it checks for rules. Calling the initialization function of CrawlSpider before defining the rules led to an empty self._rules, hence no links were followed.
Moving the super(WebSpider, self).__init__(**kw) line to the last line of WebSpider's __init__ fixed the problem.
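In other words, the fixed __init__ looks roughly like this (the same code as above, with only the super() call moved to the end):
def __init__(self, **kw):
    url = kw.get('domain') or kw.get('url')
    if not (url.startswith('http://') or url.startswith('https://')):
        url = "http://%s/" % url
    self.url = url
    self.allowed_domains = [urlparse(url).hostname.lstrip('www.')]
    self.start_urls = [url]
    self.rules = [
        Rule(SgmlLinkExtractor(
            allow_domains=self.allowed_domains,
            unique=True), callback='parse_page', follow=True)
    ]
    # self.rules is now defined, so CrawlSpider.__init__ can compile it
    super(WebSpider, self).__init__(**kw)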
Update: There is a little mistake in the code from the previously mentioned SO answer. It causes the reactor to hang after the second call. The fix is simple: in WebCrawlerScript's __init__ method, simply move this line:
self.crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
out of the if statement, as suggested in the comments there.
Update 2: I finally got pipelines to work! It was not a Celery problem. I realized that the settings module wasn't being read. It was simply an import problem. To fix it:
Set the environment variable SCRAPY_SETTINGS_MODULE in your django project's settings module myproject/settings.py:
import os
os.environ['SCRAPY_SETTINGS_MODULE'] = 'myapp.crawler.crawler.settings'
In your Scrapy settings module crawler/settings.py, add your Scrapy project path to sys.path so that relative imports in the settings file work:
import sys
sys.path.append('/absolute/path/to/scrapy/project')
Change the paths to suit your case.
