Getting Image URLs in Scrapy - python

I am very new to any form of coding. I started the learning process by attempting to make a simple crawler with Scrapy. It kinda works, but for some reason I can't get an image URL to output properly. It spits out some "data:image/gif;base64..." value instead of the actual link in the src attribute. I've looked for answers but I can't seem to find anything that gives me a definitive answer (plus I may not fully understand the issue, either). Any help would be greatly appreciated.
def parse(self, response):
    for data in response.css("a.styles__link--2pzz4"):
        yield {
            'title': data.css('a::attr(title)').get(),
            'price': data.css('span::text').get(),
            'url': data.css('a::attr(href)').get(),
            'image url': data.css('img::attr(src)').get(),
        }
    next_page = response.css('li span a::attr(href)').get()
    if next_page is not None:
        next_page = response.urljoin(next_page)
        yield scrapy.Request(next_page, callback=self.parse)

Can you give us the link that you want to scrape?
Sometimes websites lazy-load images and hide the real links in other img attributes, for example data-original, data-src, etc. Or they keep the image links in JSON stored in a script on the page.
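If that is what is happening here, a minimal sketch of a fallback could look like the following (the attribute names are just the usual lazy-load suspects, not confirmed for this particular site):

def parse(self, response):
    for data in response.css("a.styles__link--2pzz4"):
        # try src first, then the common lazy-load attributes
        image_url = (
            data.css('img::attr(src)').get()
            or data.css('img::attr(data-src)').get()
            or data.css('img::attr(data-original)').get()
        )
        yield {
            'title': data.css('a::attr(title)').get(),
            'price': data.css('span::text').get(),
            'url': data.css('a::attr(href)').get(),
            'image url': image_url,
        }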

Your website might be defining the image data as a base64 encoded blob using a data URI. Basically, the image data is embedded in the HTML, so there is no normal URL available.
Read more here: https://css-tricks.com/data-uris/

Related

Scraping images in a dynamic, JavaScript webpage using Scrapy and Splash

I am trying to scrape the link of a hi-res image from this link, but the high-res version of the image can only be inspected after clicking on the mid-sized link on the page, i.e. after clicking "Click here to enlarge the image" (on the page, it's in Turkish).
Then I can inspect it with Chrome's "Developer Tools" and get the xpath/css selector. Everything is fine up to this point.
However, you know that on a JS page, you can't just type response.xpath("//blah/blah/@src") and get some data. I installed Splash (with Docker pull) and configured my Scrapy settings.py file etc. to make it work (this YouTube link helped; no need to visit the link unless you wanna learn how to do it). ...and it worked on other JS webpages!
Just... I cannot get past this "Click here to enlarge the image!" thing and get the response. It gives me a null response.
This is my code:
import scrapy
# import json
from scrapy_splash import SplashRequest

class TryMe(scrapy.Spider):
    name = 'try_me'
    allowed_domains = ['arabam.com']

    def start_requests(self):
        start_urls = ["https://www.arabam.com/ilan/sahibinden-satilik-hyundai-accent/bayramda-arabasiz-kalmaa/17753653",
                      ]
        for url in start_urls:
            yield scrapy.Request(url=url,
                                 callback=self.parse,
                                 meta={'splash': {'endpoint': 'render.html', 'args': {'wait': 0.5}}})
            # yield SplashRequest(url=url, callback=self.parse)  # this works too

    def parse(self, response):
        ## I can get this one's link successfully since it's not between js codes:
        # IMG_LINKS = response.xpath('//*[@id="js-hook-for-ing-credit"]/div/div/a/img/@src').get()
        ## but this one just doesn't work:
        IMG_LINKS = response.xpath("/html/body/div[7]/div/div[1]/div[1]/div/img/@src").get()
        print(IMG_LINKS)  # prints null :(
        yield {"img_links": IMG_LINKS}  # gives the items: img_links:null
Shell command which I'm using:
scrapy crawl try_me -O random_filename.jl
Xpath of the link I'm trying to scrape:
/html/body/div[7]/div/div[1]/div[1]/div/img
I actually can see the link I want on the Network tab of my Developer Tools window when I click to enlarge it but I don't know how to scrape that link from that tab.
Possible solution: I will also try to get the whole garbled body of my response, i.e. response.text, and apply a regular expression to it (e.g. starts with https://... and ends with .jpg). This will definitely be looking for a needle in a haystack, but it sounds quite practical as well.
Thanks!
As far as I understand, you want to find the main image link. I checked out the page; it is inside one of the meta elements:
<meta itemprop="image" content="https://arbstorage.mncdn.com/ilanfotograflari/2021/06/23/17753653/3c57b95d-9e76-42fd-b418-f81d85389529_image_for_silan_17753653_1920x1080.jpg">
Which you can get with
>>> response.css('meta[itemprop=image]::attr(content)').get()
'https://arbstorage.mncdn.com/ilanfotograflari/2021/06/23/17753653/3c57b95d-9e76-42fd-b418-f81d85389529_image_for_silan_17753653_1920x1080.jpg'
You don't need to use Splash for this. If I check the website with Splash, arabam.com gives a permission denied error. I recommend not using Splash for this website.
For a better solution that covers all images, you can parse the JavaScript. The images array is loaded with JS right there in the source.
To reach that JavaScript, try:
response.css('script::text').getall()[14]
This will give you the whole JavaScript string containing the images array. You can parse it with a library like js2xml.
Check out how you can use it here: https://github.com/scrapinghub/js2xml. If you still have questions, you can ask. Good luck!
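For illustration, a minimal sketch of pulling image-looking URLs out of that script with js2xml (the script index 14 comes from the answer above; the .jpg filter is an assumption, so check what the string literals actually look like on the page):

import js2xml

def parse(self, response):
    js_code = response.css('script::text').getall()[14]
    parsed = js2xml.parse(js_code)  # returns an lxml tree of the parsed JavaScript
    # every string literal in the script becomes a <string> node; keep the image-looking ones
    image_urls = [s for s in parsed.xpath('//string/text()') if s.endswith('.jpg')]
    yield {'img_links': image_urls}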

When parsing many websites, how to make Scrapy stop yielding requests in the for loop if data is found and move to the next website

So, I am parsing emails from many websites.
1)
I take them from the front page and from the contacts section ('kont' or 'cont' in hrefs).
There could be many links with 'kont' or 'cont' on the front page.
I don't want to visit all of them in the "for" loop.
I would like the program to go to another website when the data is found in one of those links (email_list_2 != []). How to do that?
2)
There is some redundancy in the code: I yield data on the front page because I am afraid the request from the for loop might be unsuccessful, in which case I would lose the data from the front page.
Can I just yield
{'site': site, 'email_list_1': email_list_1, 'email_list_2': []} if data is not found,
or
{'site': site, 'email_list_1': email_list_1, 'email_list_2': ['xyz']} if data is found,
without double yielding?
Please help
Regards,
import re
import scrapy
from bs4 import BeautifulSoup

# emailRegex and website_list are defined elsewhere in the script

class QuotesSpider(scrapy.Spider):
    name = 'enrichment'
    start_urls = website_list

    def parse(self, response):
        site = response.url
        data = response.text
        email_list_1 = emailRegex.findall(data)
        yield {'lvl': '1',
               'site': site,
               'email_list_1': email_list_1,
               'email_list_2': [],
               }
        soup = BeautifulSoup(data, 'lxml')
        for link in soup.find_all('a'):
            raw_url = link.get('href')
            full_url = str(site) + str(raw_url)
            if (re.search('cont', full_url) != None or
                    re.search('kont', full_url) != None):
                yield scrapy.Request(url=full_url,
                                     callback=self.parse_2d_level,
                                     meta={'site': site, 'email_list_1': email_list_1}
                                     )

    def parse_2d_level(self, response):
        site = response.meta['site']
        email_list_1 = response.meta['email_list_1']
        data_2 = response.text
        email_list_2 = emailRegex.findall(data_2)
        yield {'lvl': '2',
               'site': site,
               'email_list_1': email_list_1,
               'email_list_2': email_list_2,
               }
I'm not sure I fully understand your question, but here it goes:
1 - You want to scrape PAGE1, look for 'cont' or 'kont', and if those components exist, make a new request for PAGE2. On PAGE2 you search for email_list_2 and yield the results. You asked:
I would like the program to go to another website when the data is
found in one of those links (email_list_2 != []). how to do that?
What website do you want it to go to? Is it a link to follow on the page you are already scraping? Is it another website in your start_urls?
In its current state, after parsing PAGE2 (in the parse_2d_level method) your spider will yield results whether it found values for email_list_2 or not. If there are other requests in the queue, Scrapy will go on to execute those; if there aren't, the spider will end.
2 - You want to make sure the data you already found before the loop is yielded in case the request from inside the loop fails. Since you said
the request from the for loop would be unsuccessful
I'll assume you are only worried about the REQUEST failure; there are other ways your parsing could fail.
For failed requests you can catch and handle the issue with a Scrapy signal called spider_error; take a look here.
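As a minimal sketch of hooking that signal up inside the spider (note that spider_error fires when a callback raises an exception; an errback on the Request is the usual way to catch download failures):

import scrapy
from scrapy import signals

class QuotesSpider(scrapy.Spider):
    name = 'enrichment'

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # get notified whenever one of this spider's callbacks raises
        crawler.signals.connect(spider.handle_spider_error, signal=signals.spider_error)
        return spider

    def handle_spider_error(self, failure, response, spider):
        self.logger.error('Callback failed for %s: %r', response.url, failure)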
3 - You should take a look at Scrapy's selectors; they are a very powerful tool. You don't need BeautifulSoup for the parsing, and the selectors will help a lot with precision.
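For example, here is a rough sketch of your first callback rewritten with selectors (emailRegex is assumed to be defined elsewhere in your script, as in the question; response.urljoin also handles relative hrefs more robustly than string concatenation):

def parse(self, response):
    site = response.url
    email_list_1 = emailRegex.findall(response.text)
    yield {'lvl': '1',
           'site': site,
           'email_list_1': email_list_1,
           'email_list_2': [],
           }
    # Scrapy selectors instead of BeautifulSoup: grab every href in one call
    for raw_url in response.css('a::attr(href)').getall():
        full_url = response.urljoin(raw_url)
        if 'cont' in full_url or 'kont' in full_url:
            yield scrapy.Request(url=full_url,
                                 callback=self.parse_2d_level,
                                 meta={'site': site, 'email_list_1': email_list_1})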

Where to add request.meta on my script to crawl once

I downloaded scrapy-crawl-once and I am trying to run it in my program. I want to scrape each book's URL from the first page of http://books.toscrape.com/ and then scrape the title of the book from that URL. I know I can scrape each book title from the first page, but as practice for scrapy-crawl-once, I wanted to do it this way. I already added the middlewares and need to know where to add request.meta. From doing some research, there aren't many code examples out there for guidance, so I was hoping someone could help here. I learned the basics of Python two weeks ago, so I am struggling right now. I tried this, but the results haven't changed. Can someone help me out, please? I added [:2] so that if I change it to [:3], I can show myself that it works.
def parse(self, response):
    all_the_books = response.xpath("//article[@class='product_pod']")
    for div in all_the_books[:2]:
        book_link = 'http://books.toscrape.com/' + div.xpath(".//h3/a/@href").get()
        request = scrapy.Request(book_link, self.parse_book)
        request.meta['book_link'] = book_link
        yield request

def parse_book(self, response):
    name = response.xpath("//div[@class='col-sm-6 product_main']/h1/text()").get()
    yield {
        'name': name,
    }
Its docs say
To avoid crawling a particular page multiple times set
request.meta['crawl_once'] = True
so you need to do
def parse(self, response):
    all_the_books = response.xpath("//article[@class='product_pod']")
    for div in all_the_books[:2]:
        book_link = 'http://books.toscrape.com/' + div.xpath(".//h3/a/@href").get()
        request = scrapy.Request(book_link, self.parse_book)
        request.meta['crawl_once'] = True
        yield request
And it will not crawl that link again
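If you still need book_link in parse_book, you can keep both meta keys side by side; a minimal sketch (assuming, as you said, the scrapy-crawl-once middlewares are already enabled in settings):

def parse(self, response):
    all_the_books = response.xpath("//article[@class='product_pod']")
    for div in all_the_books[:2]:
        book_link = 'http://books.toscrape.com/' + div.xpath(".//h3/a/@href").get()
        request = scrapy.Request(book_link, self.parse_book)
        request.meta['book_link'] = book_link   # your own data, still available via response.meta
        request.meta['crawl_once'] = True       # tells scrapy-crawl-once to skip this URL on later runs
        yield request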

How to do Next Page if its using Javascript in Scrapy

I am having a problem with crawling the next button. I tried the basic approach, but after checking the HTML code, it uses JavaScript. I've tried different rules, but nothing works. Here's the link for the website.
https://www2.hm.com/en_us/sale/shopbyproductladies/view-all.html
The next button name is "Load More Products"
Here's my working code:
def parse(self, response):
    for product_item in response.css('li.product-item'):
        url = "https://www2.hm.com/" + product_item.css('a::attr(href)').extract_first()
        yield scrapy.Request(url=url, callback=self.parse_subpage)

def parse_subpage(self, response):
    item = {
        'title': response.xpath("normalize-space(.//h1[contains(@class, 'primary') and contains(@class, 'product-item-headline')]/text())").extract_first(),
        'sale-price': response.xpath("normalize-space(.//span[@class='price-value']/text())").extract_first(),
        'regular-price': response.xpath('//script[contains(text(), "whitePrice")]/text()').re_first("'whitePrice'\s?:\s?'([^']+)'"),
        'photo-url': response.css('div.product-detail-main-image-container img::attr(src)').extract_first(),
        'description': response.css('p.pdp-description-text::text').extract_first()
    }
    yield item
As already hinted in the comments, there's no need to involve JavaScript at all. If you visit the page and open up your browser's developer tools, you'll see there are XHR requests like this taking place:
https://www2.hm.com/en_us/sale/women/view-all/_jcr_content/main/productlisting_b48c.display.json?sort=stock&image-size=small&image=stillLife&offset=36&page-size=36
These requests return JSON data that is then rendered on the page using JavaScript. So you can just scrape data from these URLs using something like json.loads(response.text). Control the products being returned with the offset and page-size parameters. I assume you are done when you receive an empty JSON. Or, you can set offset=0 and page-size=9999 to get the data in one go (9999 is just an arbitrary number which is enough in this particular case).
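A rough sketch of paging through that endpoint (the spider name, the 'products' key, and the exact JSON layout are assumptions; inspect one response in your browser to confirm the field names):

import json
import scrapy

class HMSaleSpider(scrapy.Spider):
    name = 'hm_sale'
    page_size = 36
    base_url = ('https://www2.hm.com/en_us/sale/women/view-all/_jcr_content/main/'
                'productlisting_b48c.display.json?sort=stock&image-size=small'
                '&image=stillLife&offset={offset}&page-size={page_size}')

    def start_requests(self):
        yield scrapy.Request(self.base_url.format(offset=0, page_size=self.page_size),
                             callback=self.parse, cb_kwargs={'offset': 0})

    def parse(self, response, offset):
        data = json.loads(response.text)
        products = data.get('products', [])       # 'products' key is an assumption
        for product in products:
            yield product
        if products:                               # an empty list means we've paged past the end
            next_offset = offset + self.page_size
            yield scrapy.Request(self.base_url.format(offset=next_offset, page_size=self.page_size),
                                 callback=self.parse, cb_kwargs={'offset': next_offset})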

Scrapy XHR Pagination on TripAdvisor

Although I've seen several similar questions here regarding this, none seem to precisely define the process for achieving this task. I borrowed largely from the Scrapy script located here but since it is over a year old I had to make adjustments to the xpath references.
My current code looks as such:
import scrapy
from tripadvisor.items import TripadvisorItem

class TrSpider(scrapy.Spider):
    name = 'trspider'
    start_urls = [
        'https://www.tripadvisor.com/Hotels-g29217-Island_of_Hawaii_Hawaii-Hotels.html'
    ]

    def parse(self, response):
        for href in response.xpath('//div[@class="listing_title"]/a/@href'):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.parse_hotel)
        next_page = response.xpath('//div[@class="unified pagination standard_pagination"]/child::*[2][self::a]/@href')
        if next_page:
            url = response.urljoin(next_page[0].extract())
            yield scrapy.Request(url, self.parse)

    def parse_hotel(self, response):
        for href in response.xpath('//div[starts-with(@class,"quote")]/a/@href'):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.parse_review)
        next_page = response.xpath('//div[@class="unified pagination "]/child::*[2][self::a]/@href')
        if next_page:
            url = response.urljoin(next_page[0].extract())
            yield scrapy.Request(url, self.parse_hotel)

    def parse_review(self, response):
        item = TripadvisorItem()
        item['headline'] = response.xpath('translate(//div[@class="quote"]/text(),"!"," ")').extract()[0][1:-1]
        item['review'] = response.xpath('translate(//div[@class="entry"]/p,"\n"," ")').extract()[0]
        item['bubbles'] = response.xpath('//span[contains(@class,"ui_bubble_rating")]/@alt').extract()[0]
        item['date'] = response.xpath('normalize-space(//span[contains(@class,"ratingDate")]/@content)').extract()[0]
        item['hotel'] = response.xpath('normalize-space(//span[@class="altHeadInline"]/a/text())').extract()[0]
        return item
When running the spider in its current form, I scrape the first page of reviews for each hotel listed on the start_urls page but the pagination doesn't flip to the next page of reviews. From what I suspect, this is because of this line:
next_page = response.xpath('//div[@class="unified pagination "]/child::*[2][self::a]/@href')
Since these pages load dynamically, there is no existing href for the next page on the current page. Investigating further I've read that these requests are sending a POST request using XHR. By exploring the "Network" tab in Firefox "Inspect" I can see both a Request URL and Form Data that might be needed to flip the page according to other posts on SO regarding the same topic.
However, it seems that the other posts refer to a static URL starting point when trying to pass a FormRequest using Scrapy. With TripAdvisor, the URL will always change based on the name of the hotel we're looking at, so I'm not sure how to choose a URL when using FormRequest to submit the form data: reqNum=1&changeSet=REVIEW_LIST (this form data also never seems to change from page to page).
Alternatively, there doesn't appear to be a way to extract the URL shown in the "Network" tab's "Request URL". These pages do have URLs that change from page to page but the way TripAdvisor is set up, I cannot seem to extract them from the source code. The review pages change by incrementing the part of the URL that is -orXX- where "XX" is a number. For example:
https://www.tripadvisor.com/Hotel_Review-g2312116-d113123-Reviews-Fairmont_Orchid_Hawaii-Puako_Kohala_Coast_Island_of_Hawaii_Hawaii.html
https://www.tripadvisor.com/Hotel_Review-g2312116-d113123-Reviews-or5-Fairmont_Orchid_Hawaii-Puako_Kohala_Coast_Island_of_Hawaii_Hawaii.html
https://www.tripadvisor.com/Hotel_Review-g2312116-d113123-Reviews-or10-Fairmont_Orchid_Hawaii-Puako_Kohala_Coast_Island_of_Hawaii_Hawaii.html
https://www.tripadvisor.com/Hotel_Review-g2312116-d113123-Reviews-or15-Fairmont_Orchid_Hawaii-Puako_Kohala_Coast_Island_of_Hawaii_Hawaii.html
So, my question is whether or not it is possible to paginate using the XHR request/form data or do I need to manually build a list of URLs for each hotel that adds the -orXX-?
Well I ended up discovering an xpath that apparently allowed pagination of the reviews, but it's funny because every time I checked the underlying HTML the href link never changed from referring to /Hotel_Review-g2312116-d113123-Reviews-or5-Fairmont_Orchid_Hawaii-Puako_Kohala_Coast_Island_of_Hawaii_Hawaii.html even if I was on page 10 for example. It seems the "-orXX-" part of the link always increments the XX by 5 so I'm not sure why this works.
All I did was change the line:
next_page = response.xpath('//div[@class="unified pagination "]/child::*[2][self::a]/@href')
to:
next_page = response.xpath('//link[@rel="next"]/@href')
and have >41K extracted reviews. Would love to get others' opinions on handling this problem in other situations.
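In context, the change amounts to something like this (a sketch; the rest of the spider stays as in the question):

def parse_hotel(self, response):
    for href in response.xpath('//div[starts-with(@class,"quote")]/a/@href'):
        url = response.urljoin(href.extract())
        yield scrapy.Request(url, callback=self.parse_review)
    # the <link rel="next"> element in the page head points at the next -orXX- review page
    next_page = response.xpath('//link[@rel="next"]/@href')
    if next_page:
        url = response.urljoin(next_page[0].extract())
        yield scrapy.Request(url, self.parse_hotel)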
