I am trying to navigate to links and extract data (the data is an href download link). This data should be added as a new field alongside the existing fields from the first page (from which I got the links), but I am struggling with how to do that.
First, I created a parse method that extracts all the links on the first page into a field named "links". These links redirect to a page that contains a download button, and I need the real link behind that button. So I looped over the extracted links and called yield response.follow, but it didn't go well.
import scrapy

class thirdallo(scrapy.Spider):
    name = "thirdallo"

    start_urls = [
        'https://www.alloschool.com/course/alriadhiat-alaol-ibtdaii',
    ]

    def parse(self, response):
        yield {
            'path': response.css('ol.breadcrumb li a::text').extract(),
            'links': response.css('#top .default .er').xpath('@href').extract()
        }
        hrefs = response.css('#top .default .er').xpath('@href').extract()
        for i in hrefs:
            yield response.follow(i, callback=self.parse, meta={'finalLink': response.css('a.btn.btn-primary').xpath('@href').extract()})
Among the @href values you are scraping, there are some .rar links that can't be parsed by the designated callback.
Find my code below, using the requests and lxml libraries:
>>> import requests
>>> from lxml import html
>>> s = requests.Session()
>>> resp = s.get('https://www.alloschool.com/course/alriadhiat-alaol-ibtdaii')
>>> doc = html.fromstring(resp.text)
>>> doc.xpath("//*[@id='top']//*//*[@class='default']//*//*[@class='er']/@href")
['https://www.alloschool.com/assets/documents/course-342/jthathat-alftra-1-aldora-1.rar', 'https://www.alloschool.com/assets/documents/course-342/jthathat-alftra-2-aldora-1.rar', 'https://www.alloschool.com/assets/documents/course-342/jthathat-alftra-3-aldora-2.rar', 'https://www.alloschool.com/assets/documents/course-342/jdadat-alftra-4-aldora-2.rar', 'https://www.alloschool.com/element/44905', 'https://www.alloschool.com/element/43081', 'https://www.alloschool.com/element/43082', 'https://www.alloschool.com/element/43083', 'https://www.alloschool.com/element/43084', 'https://www.alloschool.com/element/43085', 'https://www.alloschool.com/element/43086', 'https://www.alloschool.com/element/43087', 'https://www.alloschool.com/element/43088', 'https://www.alloschool.com/element/43080', 'https://www.alloschool.com/element/43089', 'https://www.alloschool.com/element/43090', 'https://www.alloschool.com/element/43091', 'https://www.alloschool.com/element/43092', 'https://www.alloschool.com/element/43093', 'https://www.alloschool.com/element/43094', 'https://www.alloschool.com/element/43095', 'https://www.alloschool.com/element/43096', 'https://www.alloschool.com/element/43097', 'https://www.alloschool.com/element/43098', 'https://www.alloschool.com/element/43099', 'https://www.alloschool.com/element/43100', 'https://www.alloschool.com/element/43101', 'https://www.alloschool.com/element/43102', 'https://www.alloschool.com/element/43103', 'https://www.alloschool.com/element/43104', 'https://www.alloschool.com/element/43105', 'https://www.alloschool.com/element/43106', 'https://www.alloschool.com/element/43107', 'https://www.alloschool.com/element/43108', 'https://www.alloschool.com/element/43109', 'https://www.alloschool.com/element/43110', 'https://www.alloschool.com/element/43111', 'https://www.alloschool.com/element/43112', 'https://www.alloschool.com/element/43113']
In your code, try this:
for i in hrefs:
    if '.rar' not in i:
        yield response.follow(i, callback=self.parse, meta={'finalLink': response.css('a.btn.btn-primary').xpath('@href').extract()})
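For the original goal, attaching the download button's href as an extra field next to the first-page data, here is a minimal sketch that carries the first-page fields along in meta and reads the button link in a separate callback. The selectors are taken from the question; the field names and the callback name parse_element are illustrative assumptions:
def parse(self, response):
    path = response.css('ol.breadcrumb li a::text').extract()
    for href in response.css('#top .default .er').xpath('@href').extract():
        if '.rar' in href:
            # .rar entries are already direct download links
            yield {'path': path, 'link': href, 'finalLink': href}
        else:
            # carry the first-page data along with the request
            yield response.follow(href, callback=self.parse_element,
                                  meta={'path': path, 'link': href})

def parse_element(self, response):
    # the element page contains the download button
    yield {
        'path': response.meta['path'],
        'link': response.meta['link'],
        'finalLink': response.css('a.btn.btn-primary').xpath('@href').extract_first(),
    }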
I want to scrape the website link from "https://www.theknot.com/marketplace/bayside-bowl-portland-me-1031451". I have used what should be the proper XPath, i.e. response.xpath("//a[@title='website']/@href").get(), but it returns an empty result when scraping.
Some websites (in fact, many) use JavaScript to generate content. That's why you always need to check the page's source HTML.
For this page you'll find that all the information you need is inside a script tag containing the text window.__INITIAL_STATE__ = . You need to get that text, and then you can use the json module to parse it:
import scrapy
import json

class TheknotSpider(scrapy.Spider):
    name = 'theknot'
    start_urls = ['https://www.theknot.com/marketplace/bayside-bowl-portland-me-1031451']

    def parse(self, response):
        initial_state_raw = response.xpath('//script[contains(., "window.__INITIAL_STATE__ = ")]/text()').re_first(r'window\.__INITIAL_STATE__ = (\{.+?)$')
        # with open('Samples/TheKnot.json', 'w', encoding='utf-8') as f:
        #     f.write(initial_state_raw)
        initial_state = json.loads(initial_state_raw)
        website = initial_state['vendor']['vendor']['displayWebsiteUrl']
        print(website)
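As a follow-up, to export the value instead of only printing it, the last line of parse can yield an item; the field name 'website' is an assumption:
        # instead of print(website), yield an item so it can be exported:
        yield {'website': website}
Running scrapy runspider theknot.py -o website.json would then write the result to a JSON file (the file names here are just examples).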
I am very new to web scraping. I have started using BeautifulSoup in Python, and I wrote code that loops through a list of URLs and gets the data I need. The code works fine for 10-12 links, but I am not sure whether it will still be effective if the list has over 100 links. Is there an alternative approach, or another library, to get the data from a large list of URLs without harming the website in any way? Here is my code so far.
from requests import get
from bs4 import BeautifulSoup

url_list = [url1, url2, url3, url4, url5]
mylist = []
for url in url_list:
    res = get(url)
    soup = BeautifulSoup(res.text, 'html.parser')
    data = soup.find('pre').text
    mylist.append(data)
Here's an example that may work for you.
from simplified_scrapy import Spider, SimplifiedDoc, SimplifiedMain, utils

class MySpider(Spider):
    name = 'my_spider'
    start_urls = ['url1']
    # refresh_urls = True  # To download already-downloaded links again, remove the "#" at the front

    def __init__(self):
        # If your links are stored elsewhere, read them in here.
        self.start_urls = utils.getFileLines('you url file name.txt')
        Spider.__init__(self, self.name)  # Necessary

    def extract(self, url, html, models, modelNames):
        doc = SimplifiedDoc(html)
        data = doc.select('pre>text()')  # Extract the data you want.
        return {'Urls': None, 'Data': {'data': data}}  # Return the data to the framework, which will save it for you.

SimplifiedMain.startThread(MySpider())  # Start download
You can see more examples, as well as the source code of the simplified_scrapy library, here: https://github.com/yiyedata/simplified-scrapy-demo
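If you prefer to stay with requests and BeautifulSoup, here is a minimal sketch of the same loop with a shared session, basic error handling, and a pause between requests so the site isn't hammered; the one-second delay is an assumption, not a value recommended by any site:
import time

import requests
from bs4 import BeautifulSoup

def scrape_all(url_list, delay=1.0):
    results = []
    with requests.Session() as session:
        for url in url_list:
            res = session.get(url, timeout=30)
            res.raise_for_status()  # fail early on HTTP errors
            soup = BeautifulSoup(res.text, 'html.parser')
            pre = soup.find('pre')
            results.append(pre.text if pre else None)
            time.sleep(delay)  # pause between requests
    return results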
How can I go to a link, get its sub-links, and then get their sub-links in turn? For example,
I want to go to
"https://stackoverflow.com"
then extract its links, e.g.
['https://stackoverflow.com/questions/ask', 'https://stackoverflow.com/?tab=bounties']
and then go to each of those sub-links and extract their links as well.
I would recommend using Scrapy for this. With Scrapy, you create a spider object which is then run by the Scrapy module.
First, to get all the links on a page, you can create a Selector object and find all of the hyperlink elements using this XPath:
hxs = scrapy.Selector(response)
urls = hxs.xpath('*//a/@href').extract()
Since hxs.xpath returns an iterable list of results, you can iterate over them directly without storing them in a variable. Each URL found should also be passed back into this function using the callback argument, allowing the spider to recursively find all the links within each URL found:
hxs = scrapy.Selector(response)
for url in hxs.xpath('*//a/@href').extract():
    yield scrapy.http.Request(url=url, callback=self.parse)
Each href found might be relative and not contain the scheme and domain of the original URL, so that check has to be made:
if not (url.startswith('http://') or url.startswith('https://')):
    url = "https://stackoverflow.com/" + url
Finally, each URL can be passed to a different function to be parsed; in this case it is just printed:
self.handle(url)
All of this put together in a full Spider object looks like this:
import scrapy

class StackSpider(scrapy.Spider):
    name = "stackoverflow.com"
    # limit the scope to stackoverflow
    allowed_domains = ["stackoverflow.com"]
    start_urls = [
        "https://stackoverflow.com/",
    ]

    def parse(self, response):
        hxs = scrapy.Selector(response)
        # extract all links from the page
        for url in hxs.xpath('*//a/@href').extract():
            # make it a valid url
            if not (url.startswith('http://') or url.startswith('https://')):
                url = "https://stackoverflow.com/" + url
            # process the url
            self.handle(url)
            # recursively parse each url
            yield scrapy.http.Request(url=url, callback=self.parse)

    def handle(self, url):
        print(url)
And the spider would be run like this:
$ scrapy runspider spider.py > urls.txt
Also, keep in mind that running this code will get you rate limited by Stack Overflow. You might want to find a different target for testing, ideally a site that you're hosting yourself.
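One way to reduce the chance of being rate limited is to slow the spider down using Scrapy's built-in throttling settings, for example through custom_settings on the spider; the values below are assumptions to illustrate the idea:
class StackSpider(scrapy.Spider):
    name = "stackoverflow.com"
    custom_settings = {
        "DOWNLOAD_DELAY": 1.0,                # wait between requests
        "AUTOTHROTTLE_ENABLED": True,         # adapt the delay to server responsiveness
        "CONCURRENT_REQUESTS_PER_DOMAIN": 1,  # one request at a time per domain
    }
    # ... rest of the spider as above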
I am building a CrawlSpider to scrape statutory law data from the following website (https://www.azleg.gov/viewdocument/?docName=https://www.azleg.gov/ars/1/00101.htm). I am aiming to extract the statute text, which should be reachable with the following XPath: //div[@class = 'first']/p/text().
All of my Scrapy requests are yielding incomplete HTML responses, so when I run the relevant XPath queries, I get an empty list. However, when I use the requests library, the HTML downloads correctly.
Using an online XPath tester, I've verified that my XPath queries should produce the desired content. Using scrapy shell, I've viewed Scrapy's response object in my browser, and it looks just like it does when I'm browsing natively. I've tried enabling middleware for both BeautifulSoup and Selenium, but neither appeared to work.
Here's my crawl spider
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class AZspider(CrawlSpider):
    name = "arizona"
    start_urls = [
        "https://www.azleg.gov/viewdocument/?docName=https://www.azleg.gov/ars/1/00101.htm",
    ]
    rule = (Rule(LinkExtractor(restrict_xpaths="//div[@class = 'article']"), callback="parse_stats_az", follow=True),)

    def parse_stats_az(self, response):
        statutes = response.xpath("//div[@class = 'first']/p")
        yield {
            "statutes": statutes
        }
And here's the code that successfully generated the correct response object:
az_leg = requests.get("https://www.azleg.gov/viewdocument/?docName=https://www.azleg.gov/ars/1/00101.htm")
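One way to narrow the problem down is to run the same XPath over the HTML that requests returns, using parsel (the selector library Scrapy itself uses); if it matches here but not in the spider, the difference lies in how the page is fetched rather than in the query. A minimal sketch:
import requests
from parsel import Selector

az_leg = requests.get("https://www.azleg.gov/viewdocument/?docName=https://www.azleg.gov/ars/1/00101.htm")
sel = Selector(text=az_leg.text)
# if this prints the statute paragraphs, the XPath itself is fine
print(sel.xpath("//div[@class = 'first']/p/text()").getall())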
Hi guys, I am very new to scraping data and have only tried the basics. My problem is that I have two web pages on the same domain that I need to scrape.
My logic is:
First page: www.sample.com/view-all.html
This page lists all the items, and I need to get the href attribute of every item.
Second page: www.sample.com/productpage.52689.html
This is a link that comes from the first page, so 52689 needs to change dynamically depending on the link provided by the first page.
I need to get all the data, like title, description, etc., on the second page.
What I am thinking of is a for loop, but it's not working on my end. I have searched on Google but no one has the same problem as mine. Please help me.
import scrapy

class SalesItemSpider(scrapy.Spider):
    name = 'sales_item'
    allowed_domains = ['www.sample.com']
    start_urls = ['www.sample.com/view-all.html', 'www.sample.com/productpage.00001.html']

    def parse(self, response):
        for product_item in response.css('li.product-item'):
            item = {
                'URL': product_item.css('a::attr(href)').extract_first(),
            }
            yield item
Inside parse you can yield a Request() with a URL and a callback function's name, to scrape that URL in a different function:
def parse(self, response):
    for product_item in response.css('li.product-item'):
        url = product_item.css('a::attr(href)').extract_first()
        # it will send `www.sample.com/productpage.52689.html` to `parse_subpage`
        yield scrapy.Request(url=url, callback=self.parse_subpage)

def parse_subpage(self, response):
    # here you parse www.sample.com/productpage.52689.html
    item = {
        'title': ...,
        'description': ...
    }
    yield item
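If the item from the second page should also keep something found on the first page (for example the product URL itself), one option is to pass it along with the request, e.g. via cb_kwargs; the title and description selectors below are assumptions:
def parse(self, response):
    for product_item in response.css('li.product-item'):
        url = product_item.css('a::attr(href)').extract_first()
        # pass the first-page URL into the callback as a keyword argument
        yield response.follow(url, callback=self.parse_subpage, cb_kwargs={'product_url': url})

def parse_subpage(self, response, product_url):
    yield {
        'URL': product_url,
        'title': response.css('h1::text').get(),       # assumed selector
        'description': response.css('p::text').get(),  # assumed selector
    }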
Look for Request in the Scrapy documentation and its tutorial.
There is also
response.follow(url, callback=self.parse_subpage)
which will automatically add www.sample.com to the URL, so you don't have to do it on your own as in
Request(url = "www.sample.com/" + url, callback=self.parse_subpage)
See A shortcut for creating Requests
If you are interested in scraping, then you should read docs.scrapy.org from the first page to the last one.