I need to parse all articles from one site. There are 1000+ shops on this site.
To get any single article I need an id_shop in my cookies, which I obtain with the Requests module.
To get all 1000+ id_shops I need to parse Ajax forms.
Then I run 1000+ spiders, one for each shop, this way:
def setup_crawler(domain):
    spider = MySpider(domain=domain)
    settings = get_project_settings()
    crawler = Crawler(settings)
    crawler.configure()
    crawler.crawl(spider)
    crawler.start()
So I have a .py script that does all these steps, and I run it with python MySpider.py. Everything works.
The problem is: I can't run my spider simultaneously with other spiders. I'm following the pattern listed here (http://doc.scrapy.org/en/latest/topics/practices.html):
for domain in ['scrapinghub.com', 'insophia.com']:
    setup_crawler(domain)
log.start()
reactor.run()
Instead of setup_crawler() I use MySpider.run().
What I get is that MySpider waits for the others to finish.
I have two questions:
1. How can I run MySpider simultaneously with other spiders? (See the sketch below.)
2. I want to parse the id_shops from the Ajax forms and then run 1000+ spiders, one per id_shop, from a single script. Is that possible?
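For reference, a minimal sketch of how the Scrapy docs cited above suggest running many spiders concurrently in one process with CrawlerProcess; the id_shop spider argument and the all_id_shops list are assumptions based on the question, not the original code:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
for id_shop in all_id_shops:  # assumed: the list parsed from the Ajax forms
    process.crawl(MySpider, id_shop=id_shop)  # keyword args go to MySpider.__init__
process.start()  # blocks here; all scheduled crawls run concurrently

With 1000+ spiders scheduled in one process, memory and concurrency limits are likely to matter, so batching the shops into smaller groups may be necessary.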
Related
I've written a Scrapy crawler to fetch me some sweet sweet data, and it works. I'm very impressed with myself for the achievement. I even created a Jupyter notebook to process the data from the JSON file I created.
But I've created the program so that people at work can use it, and getting them to navigate to a folder and use command lines isn't going to work, so I wanted to make something I can call and then process afterwards. But for some reason Scrapy just isn't playing ball. I've found a few bits of help, but once the crawl has completed, the JSON output I've requested doesn't appear. Yet when I run it from the command line, it shows up.
def parse(self, response):
    resp_dict = json.loads(response.body)
    # file_name is defined elsewhere in the spider
    with open(file_name, 'w') as f:
        json.dump(resp_dict, f, indent=4)
This is the bit that works, sometimes. I just don't understand why it won't give me an output when called from a different script. I've also tried adding this, but I think I'm putting it in the wrong place:
settings = get_project_settings()
settings.set('FEED_FORMAT', 'json')
settings.set('FEED_URI', 'result.json')
I can successfully call the Scrapy spider, and I can see the terminal showing me what's going on. But I just can't get the JSON output. Literally tearing my hair out now.
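One hedged aside on the parse() snippet above (an assumption about intent, not the original code): instead of writing the file by hand inside parse(), the parsed data can be yielded as an item, since plain dicts are valid Scrapy items, and the feed export then writes whatever file the settings configure:

import json

def parse(self, response):
    # Yield a dict item; the feed exporter (FEEDS / FEED_URI settings)
    # serializes it to the configured output file.
    yield json.loads(response.body)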
from scrapy.crawler import CrawlerProcess

process = CrawlerProcess(settings={
    "FEEDS": {
        "items.json": {"format": "json"},
    }
})
process.crawl(HoggleSpider)
process.start()
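One hedged note, not from the original post: the FEEDS setting is only honored by Scrapy 2.1+ (on older versions, FEED_FORMAT / FEED_URI as set earlier are the equivalents), and a relative path like items.json is resolved against the working directory of whichever process starts the crawl, which is a common reason the file seems to vanish when the spider is launched from another script. An absolute path (the one below is just an example) removes that ambiguity:

# Assumption: an absolute output path so the feed lands in a predictable
# place no matter which script starts the crawl.
process = CrawlerProcess(settings={
    "FEEDS": {
        "/tmp/items.json": {"format": "json"},
    }
})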
I created a CrawlSpider that should follow all "internal" links up to a certain number of items / pages / amount of time.
I am using multiprocessing.Pool to process a few pages at the same time (e.g. 6 workers).
I call the CrawlSpider with os.system from a separate Python script:
import os
...
cmd = 'scrapy crawl FullPageCrawler -t jsonlines -o "{0}" -a URL={1} -s DOWNLOAD_MAXSIZE=0 -s CLOSESPIDER_TIMEOUT=180 -s CLOSESPIDER_PAGECOUNT=150 -s CLOSESPIDER_ITEMCOUNT=100 -s DEPTH_LIMIT=5 -s DEPTH_PRIORITY=0 --nolog'.format(OUTPUT_FILE, url.strip())
os.system(cmd)
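For what it's worth, a hedged alternative that sidesteps shell quoting entirely (OUTPUT_FILE and url come from the snippet above): subprocess.run takes the arguments as a list and also waits for the crawl to finish:

import subprocess

# Each argument is its own list element, so no shell quoting is needed.
subprocess.run([
    "scrapy", "crawl", "FullPageCrawler",
    "-t", "jsonlines", "-o", OUTPUT_FILE,
    "-a", "URL={}".format(url.strip()),
    "-s", "CLOSESPIDER_TIMEOUT=180",
    "-s", "CLOSESPIDER_PAGECOUNT=150",
    "-s", "CLOSESPIDER_ITEMCOUNT=100",
    "-s", "DEPTH_LIMIT=5",
], check=True)  # raise if scrapy exits with a non-zero status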
The os.system approach works pretty well for some of my pages, but for specific pages the crawler ignores the settings I passed.
I tried to define the following (with what I think each one does):
CLOSESPIDER_PAGECOUNT: the total number of pages it will follow?
CLOSESPIDER_ITEMCOUNT: not sure about this one. What is the difference from PAGECOUNT?
CLOSESPIDER_TIMEOUT: the maximum time a crawler should be working.
Right now I'm facing an example that has already crawled more than 4000 pages (or items?!) and has been running for more than 1 hour.
Am I running into this because I defined everything at the same time?
Do I also need to define the same settings in settings.py?
Could one of them alone be enough for me (e.g. maximum uptime = 10 minutes)?
I tried using subprocess.Popen instead of os.system because it has a wait function, but that did not work as expected either.
In the end, using os.system is the most stable thing I tried, and I want to stick with it. The only problem is Scrapy.
I tried searching for an answer on SO but couldn't find any help!
EDIT:
The above example ended up with 16,009 scraped subpages and over 333 MB.
After continuing to search for an answer, I came up with the following solution.
Within my CrawlSpider I defined a maximum number of pages (self.max_cnt) at which the scraper should stop, plus a counter (self.max_counter) that is checked and incremented for every page my scraper visits.
If the maximum number of pages is exceeded, the spider is closed by raising scrapy.exceptions.CloseSpider.
from urlparse import urlparse  # on Python 3: from urllib.parse import urlparse
from scrapy.exceptions import CloseSpider
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class FullPageSpider(CrawlSpider):
    name = "FullPageCrawler"
    rules = (Rule(LinkExtractor(allow=()), callback="parse_all", follow=True),)

    def __init__(self, URL=None, *args, **kwargs):
        super(FullPageSpider, self).__init__(*args, **kwargs)
        self.start_urls = [URL]
        self.allowed_domains = ['{uri.netloc}'.format(uri=urlparse(URL))]
        self.max_cnt = 250      # the page limit
        self.max_counter = 0    # pages visited so far

    def parse_all(self, response):
        if self.max_counter < self.max_cnt:
            self.max_counter += 1  # increment the counter, not the limit
            ...
        else:
            raise CloseSpider('Exceeded the maximum number of pages!')
This works fine for me now, but I would still be interested in why the crawler settings were not working as expected. (One likely factor, as far as I understand it: the CLOSESPIDER_* limits close the spider gracefully, so requests already scheduled or in flight are still processed, which can overshoot the configured numbers considerably.)
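If it helps, a hedged alternative to passing every limit as an -s flag (a sketch under the assumption that the limits should travel with the spider): Scrapy's custom_settings class attribute lets the spider carry its own settings, so nothing depends on the shell parsing the command line correctly:

class FullPageSpider(CrawlSpider):
    name = "FullPageCrawler"
    # Per-spider settings; these override the project-wide settings.py values.
    custom_settings = {
        "CLOSESPIDER_TIMEOUT": 180,
        "CLOSESPIDER_PAGECOUNT": 150,
        "CLOSESPIDER_ITEMCOUNT": 100,
        "DEPTH_LIMIT": 5,
    }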
Both Scrapy and Django are excellent standalone Python frameworks for building crawlers and web applications with little code. Still, whenever you want to create a spider, you have to generate a new code file and write the same piece of code (albeit with some variation). I was trying to integrate the two, but got stuck at the point where I need to send the 200_OK status saying the spider ran successfully, while the spider keeps running and saves its data to the database once it finishes.
I know the APIs are already available with scrapyd, but I wanted to make it more versatile, letting you create a crawler without writing multiple files. I thought CrawlerRunner (https://docs.scrapy.org/en/latest/topics/practices.html) would help with this, so I also tried the approach from
"Easiest way to run scrapy crawler so it doesn't block the script",
but it gives me the error builtins.ValueError: signal only works in main thread.
Even though I get the response back from the REST framework, the crawler fails to run due to this error. Does that mean I need to switch to the main thread?
I am doing this with a simple piece of code:
from scrapy.crawler import CrawlerRunner
from twisted.internet import reactor

spider = GeneralSpider(pk)  # note: unused; runner.crawl() instantiates the spider itself
runner = CrawlerRunner()
d = runner.crawl(GeneralSpider, pk)
d.addBoth(lambda _: reactor.stop())
reactor.run()  # this is the call that fails outside the main thread
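A hedged workaround for the signal only works in main thread error (a sketch, not the poster's code, assuming the crawl is launched from a Django view, i.e. not the main thread): the crochet library runs the Twisted reactor in a background thread once per process, so the view never calls reactor.run() itself:

from crochet import setup, run_in_reactor
from scrapy.crawler import CrawlerRunner

setup()  # start the shared reactor in a daemon thread (call once, at import time)

@run_in_reactor
def start_crawl(pk):
    # Runs inside the reactor thread; returns a Deferred that fires on completion.
    runner = CrawlerRunner()
    return runner.crawl(GeneralSpider, pk)

# In the Django view: fire the crawl and return 200_OK immediately;
# the spider keeps running in the background and can save to the database when done.
start_crawl(pk)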
I ran a Scrapy spider in a Django view, and I'm sharing my code.
import os
from pydoc import locate

from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor

settings_file_path = "scraping.settings"  # the Scrapy project's settings module
os.environ.setdefault('SCRAPY_SETTINGS_MODULE', settings_file_path)
settings = get_project_settings()
runner = CrawlerRunner(settings)

path = "/path/to/sample.py"
path = path.replace('.py', '')  # strip the extension
path = path.replace('/', '.')   # turn the file path into a dotted module path
file_path = "{}.SampleSpider".format(path)
SampleSpider = locate(file_path)  # resolve the spider class from its dotted path

d = runner.crawl(SampleSpider)
d.addBoth(lambda _: reactor.stop())
reactor.run()
I hope it's helpful.
I am using Scrapy to crawl several websites. My spider isn't allowed to jump across domains. In this scenario, redirects make the crawler stop immediately. In most cases I know how to handle it, but this is a weird one.
The culprit is: http://www.cantonsd.org/
I checked its redirect pattern with http://www.wheregoes.com/ and it tells me it redirects to "/". This prevents the spider from entering its parse function. How can I handle this?
EDIT:
The code.
I invoke the spider using the APIs provided by scrapy here: http://doc.scrapy.org/en/latest/topics/practices.html#run-scrapy-from-a-script
The only difference is that my spider is custom. It is created as follows:
spider = DomainSimpleSpider(
    start_urls=[start_url],
    allowed_domains=[allowed_domain],
    url_id=url_id,
    cur_state=cur_state,
    state_id_url_map=id_url,
    allow=re.compile(r".*%s.*" % re.escape(allowed_path), re.IGNORECASE),
    tags=('a', 'area', 'frame'),
    attrs=('href', 'src'),
    response_type_whitelist=[r"text/html", r"application/xhtml+xml", r"application/xml"],
    state_abbr=state_abbrs[cur_state]
)
I think the problem is that allowed_domains sees that "/" is not part of the list (which contains only cantonsd.org) and shuts everything down.
I'm not reporting the full spider code because it never gets invoked at all, so it can't be the problem.
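A hedged sketch of one way to take control of such redirects (the meta keys below are standard Scrapy; whether this slots into DomainSimpleSpider is an assumption): ask the redirect middleware to leave these requests alone and let the 3xx response reach the callback, so you can inspect the Location header and decide yourself:

from scrapy import Request

def start_requests(self):
    for url in self.start_urls:
        yield Request(
            url,
            callback=self.parse,
            meta={
                # Don't follow redirects automatically...
                'dont_redirect': True,
                # ...and don't discard the 3xx response; deliver it to parse().
                'handle_httpstatus_list': [301, 302, 303, 307],
            },
        )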
I'm working on a Scrapy app where I'm trying to log in to a site with a form that uses a captcha (it's not spam). I am using ImagesPipeline to download the captcha, and I am printing it to the screen for the user to solve. So far so good.
My question is: how can I restart the spider to submit the captcha/form information? Right now my spider requests the captcha page, then returns an Item containing the image_url of the captcha. This is then processed/downloaded by the ImagesPipeline and displayed to the user. I'm unclear how I can resume the spider's progress and pass the solved captcha and the same session to the spider, as I believe the spider has to return the item (i.e. quit) before the ImagesPipeline goes to work.
I've looked through the docs and examples, but I haven't found any that make it clear how to make this happen.
This is how you might get it to work inside the spider.
# Pause the engine while the captcha is solved out of band,
# then resume once the answer is available.
self.crawler.engine.pause()
process_my_captcha()
self.crawler.engine.unpause()
Once you get the request, pause the engine, display the image, read the info from the user, and resume the crawl by submitting a POST request for login.
I'd be interested to know if the approach works for your case.
I would not create an Item and use the ImagePipeline.
import os
import subprocess
import urllib
...

def start_requests(self):
    request = Request("http://webpagewithcaptchalogin.com/", callback=self.fill_login_form)
    return [request]

def fill_login_form(self, response):
    x = HtmlXPathSelector(response)
    img_src = x.select("//img/@src").extract()
    # Delete the previous captcha file (if any) and use urllib to write the new one to disk
    if os.path.exists(r"c:\captcha.jpg"):
        os.remove(r"c:\captcha.jpg")
    urllib.urlretrieve(img_src[0], r"c:\captcha.jpg")
    # I use a program here to show the jpg (actually send it somewhere)
    captcha = subprocess.check_output(r".\external_utility_solving_captcha.exe")
    # OR just get the input from the user from stdin
    captcha = raw_input("put captcha in manually>")
    # This request performs the login and calls process_home_page with the
    # response (this way you can chain pages from start_requests() to parse())
    return [FormRequest.from_response(
        response, formnumber=0,
        formdata={'user': 'xxx', 'pass': 'xxx', 'captcha': captcha},
        callback=self.process_home_page)]

def process_home_page(self, response):
    # check if you logged in etc. etc.
    ...
What I do here is use urllib.urlretrieve(url) (to store the image), os.remove(file) (to delete the previous image), and subprocess.check_output (to call an external command-line utility to solve the captcha). The whole Scrapy infrastructure is not used in this "hack", because solving a captcha like this is always a hack.
The whole business of calling an external subprocess could have been done more nicely, but this works.
On some sites it's not possible to save the captcha image, and you have to call up the page in a browser and call a screen-capture utility and crop at an exact location to "cut out" the captcha. Now that is screenscraping.