How to extract C code from the HTML using scrapy? - python

I am using Scrapy from the Anaconda command line to extract C code from a GitHub page (HTML). GitHub renders the file as a table of numbered lines, and I need to extract the C source exactly as it appears on the page (a screenshot was attached in the original post).
I used the following XPath expressions to extract the required code:
import scrapy

class TestCCodeSpider(scrapy.Spider):
    name = 'test_c_code'
    allowed_domains = ['github.com']
    start_urls = ['http://github.com/gouravthakur39/beginners-C-program-examples/blob/master/AllTempScalesConv.c/']
    custom_settings = {'FEED_URI': "test_c.csv",
                       'FEED_FORMAT': 'csv'}

    def parse(self, response):
        print("processing: " + response.url)
        notation = response.xpath("//table[@class='highlight tab-size js-file-line-container']/tr/td[@class='blob-code blob-code-inner js-file-line']/text()").extract()
        text_td = response.xpath("//table[@class='highlight tab-size js-file-line-container']/tr/td[@class='blob-code blob-code-inner js-file-line']/span/text()").extract()
        row_data = zip(notation, text_td)

        for i in row_data:
            scrapped_info = {
                'notation': i[0],
                'text': i[1],
            }
            yield scrapped_info
When I run the XPaths individually, each gives the right result. For example,
response.xpath("//table[@class='highlight tab-size js-file-line-container']/tr/td[@class='blob-code blob-code-inner js-file-line']/text()").extract()
gives one set of results (shown as a screenshot in the original post), while the other extraction,
response.xpath("//table[@class='highlight tab-size js-file-line-container']/tr/td[@class='blob-code blob-code-inner js-file-line']/span/text()").extract()
gives another.
Can anybody guide me on how to extract the complete C code from the HTML without any distortion?
Thanks in advance.

You can achieve this in two ways:
A. Use this XPath instead: normalize-space(//div[@itemprop='text']). This gave me the desired result.
B. Crawl the following URL instead:
https://raw.githubusercontent.com/gouravthakur39/beginners-C-program-examples/master/AllTempScalesConv.c/. I haven't checked the XPath for this, though.
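For option B, here is a minimal sketch of a spider that crawls the raw file. One assumption on my part: the raw URL is written here without the trailing slash, since raw.githubusercontent.com serves a plain-text file, so no XPath is needed at all:

import scrapy

class RawCCodeSpider(scrapy.Spider):
    # Hypothetical spider for option B; the response body is already
    # the plain C source, so it can be yielded as-is.
    name = 'raw_c_code'
    start_urls = ['https://raw.githubusercontent.com/gouravthakur39/beginners-C-program-examples/master/AllTempScalesConv.c']

    def parse(self, response):
        # response.text holds the entire file as one string
        yield {'code': response.text}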
Hope this helps!

Related

Having trouble with finviz screener tool using Scrapy

I looked over a similar question that involves the same site, but it looks like my problem has different CSS/HTML behind it. I read through the Scrapy tutorial, but I'm still having trouble getting the code to print.
import scrapy

class finvizSpider(scrapy.Spider):
    name = "finviz"
    start_urls = [
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-change",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=21",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=41",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=61",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=81",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=101",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=121",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=141",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=161",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=181",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=201",
    ]

    def parse(self, response):
        data = response.xpath('//div[@id="screener-content"]/div/table/tbody').extract()
        print(data)
Any help would be much appreciated. Thanks
EDIT:
The answer below allows me to run just one URL in the Scrapy shell, and it switches the original URL to the main page without the filters added on.
I am still unable to run this in Python. (The output screenshot was attached in the original post.)
There are two issues in your code:
First, the Scrapy response does not contain <tbody> elements, because Scrapy downloads the original HTML page. What you see in the browser is the modified HTML, to which the browser adds <tbody> elements inside tables.
Second, you have an extra div step in //div[@id="screener-content"]/div/table/tbody (the /div right after the id predicate).
Try this XPath instead, '//div[@id="screener-content"]/table//tr/td/table//tr/td//text()', or just execute the modified code below.
Code
import scrapy

class finvizSpider(scrapy.Spider):
    name = "finviz"
    start_urls = [
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-change",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=21",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=41",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=61",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=81",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=101",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=121",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=141",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=161",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=181",
        "https://finviz.com/screener.ashx?v=111&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low&ft=4&o=-ticker&r=201",
    ]

    def parse(self, response):
        tickers = response.xpath('//a[@class="screener-link-primary"]/text()').extract()
        print(tickers)
(An output screenshot was attached in the original post.)
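As a side note, the eleven start URLs differ only in the r= offset, so they could be generated rather than hardcoded. A sketch (assuming the first page keeps o=-change and the rest page through o=-ticker in steps of 20, as in the list above):

base = ("https://finviz.com/screener.ashx?v=111"
        "&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low"
        "&ft=4&o=-ticker&r={}")
start_urls = (
    ["https://finviz.com/screener.ashx?v=111"
     "&f=cap_small,geo_usa,sh_avgvol_o300,sh_opt_option,sh_short_low"
     "&ft=4&o=-change"]
    + [base.format(r) for r in range(21, 202, 20)]  # r = 21, 41, ..., 201
)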

Scrapy- can't extract data from h3

I'm starting out with Scrapy, and I've managed to extract some of the data I need. However, not everything is obtained properly. I'm applying the knowledge from the official tutorial found here, but it's not working. I've Googled around a bit, and also read this SO question, but I'm fairly certain that isn't the problem here.
Anyhow, I'm trying to parse the product information from this webshop. I'm trying to obtain the product name, price, rrp, release date, category, universe, author and publisher. Here is the relevant CSS for one product: https://pastebin.com/9tqnjs7A. Here's my code. Everything with a #! at the end isn't working as expected.
import scrapy
import pprint

class ForbiddenPlanetSpider(scrapy.Spider):
    name = "fp"
    start_urls = [
        'https://forbiddenplanet.com/catalog/?q=mortal%20realms&sort=release-date&page=1',
    ]

    def parse(self, response):
        for item in response.css("section.zshd-00"):
            print(response.css)
            name = item.css("h3.h4::text").get()  #!
            price = item.css("span.clr-price::text").get() + item.css("span.t-small::text").get()
            rrp = item.css("del.mqr::text").get()
            release = item.css("dd.mzl").get()  #!
            category = item.css("li.inline-list__item::text").get()  #!
            universe = item.css("dt.txt").get()  #!
            authors = item.css("a.SubTitleItems").get()  #!
            publisher = item.css("dd.mzl").get()  #!
            pprint.pprint(dict(name=name,
                               price=price,
                               rrp=rrp,
                               release=release,
                               category=category,
                               universe=universe,
                               authors=authors,
                               publisher=publisher
                               )
                          )
I think I need to add some sub-searching (at the moment release and publisher have the same criteria, for example), but I don't know how to phrase a search for that (I've tried, but I ended up with generic tutorials that don't cover it). Anything pointing me in the right direction is appreciated!
Oh, and I didn't include any ' ' spaces in the selectors, because whenever I used one, Scrapy immediately failed to find anything.
Scrapy doesn't render JS. Try disabling JavaScript in your browser and refreshing the page; the HTML structure is different for the version of the site without JS.
You should rewrite your selectors for the new HTML structure. Try using XPath instead of CSS; it's much more flexible.
UPD
The easiest way to scrape this website is to make a request to https://forbiddenplanet.com/api/products/listing/?q=mortal%20realms&sort=release-date
The response is a JSON object with all the necessary data. You can transform the "results" field (or the whole JSON object) into a Python dictionary and get all the fields with dictionary methods.
Here is a code draft that works and shows the idea:
import scrapy
import json

def get_tags(tags: list):
    # Collect the 'name' field of each tag dict; return None if there are no tags.
    parsed_tags = []
    if tags:
        for tag in tags:
            parsed_tags.append(tag.get('name'))
        return parsed_tags
    return None

class ForbiddenplanetSpider(scrapy.Spider):
    name = 'forbiddenplanet'
    allowed_domains = ['forbiddenplanet.com']
    start_urls = ['https://forbiddenplanet.com/api/products/listing/?q=mortal%20realms&sort=release-date']

    def parse(self, response):
        response_dict = json.loads(response.body)
        items = response_dict.get('results')
        for item in items:
            yield {
                'name': item.get('title'),
                'price': item.get('site_price'),
                'rrp': item.get('rrp'),
                'release': item.get('release_date'),
                'category': get_tags(item.get('derived_tags').get('type')),
                'universe': get_tags(item.get('derived_tags').get('universe')),
                'authors': get_tags(item.get('derived_tags').get('author')),
                'publisher': get_tags(item.get('derived_tags').get('publisher')),
            }

        # The API paginates; follow the 'next' URL until it is null.
        next_page = response_dict.get('next')
        if next_page:
            yield scrapy.Request(
                url=next_page,
                callback=self.parse
            )
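A draft like this can be run without creating a full project, e.g. with Scrapy's runspider command (assuming the spider is saved as forbiddenplanet.py; the output filename is just an example):

scrapy runspider forbiddenplanet.py -o mortal_realms.json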

Where to add request.meta on my script to crawl once

I downloaded scrapy-crawl-once and I am trying to run it in my program. I want to scrape each book's URL from the first page of http://books.toscrape.com/ and then scrape the title of the book from that URL. I know I can scrape each book title from the first page directly, but as practice with scrapy-crawl-once, I wanted to do it this way. I already added the middlewares and need to know where to add request.meta. From doing some research, there aren't many code examples out there for guidance, so I was hoping someone could help here. I learned the basics of Python two weeks ago, so I'm struggling right now. I tried this, but the results haven't changed. Can someone help me out please? I added [:2] so that if I change it to [:3], I can show myself that it works.
def parse(self, response):
    all_the_books = response.xpath("//article[@class='product_pod']")
    for div in all_the_books[:2]:
        book_link = 'http://books.toscrape.com/' + div.xpath(".//h3/a/@href").get()
        request = scrapy.Request(book_link, self.parse_book)
        request.meta['book_link'] = book_link
        yield request

def parse_book(self, response):
    name = response.xpath("//div[@class='col-sm-6 product_main']/h1/text()").get()
    yield {
        'name': name,
    }
Its docs say:
To avoid crawling a particular page multiple times, set
request.meta['crawl_once'] = True
so you need to do:

def parse(self, response):
    all_the_books = response.xpath("//article[@class='product_pod']")
    for div in all_the_books[:2]:
        book_link = 'http://books.toscrape.com/' + div.xpath(".//h3/a/@href").get()
        request = scrapy.Request(book_link, self.parse_book)
        request.meta['crawl_once'] = True
        yield request

and it will not crawl that link again.
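For completeness (the question notes the middlewares are already added): scrapy-crawl-once only takes effect when its middleware is enabled in settings.py. Per its README, that looks roughly like this:

SPIDER_MIDDLEWARES = {
    'scrapy_crawl_once.CrawlOnceMiddleware': 100,
}
DOWNLOADER_MIDDLEWARES = {
    'scrapy_crawl_once.CrawlOnceMiddleware': 50,
}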

How to: Get a Python Scrapy to run a simple xpath retrieval

I'm very new to Python and am trying to build a script that will eventually extract page titles and other data from specified URLs into a .csv in the format I specify.
I have managed to get the spider to work in CMD using:
response.xpath("/html/head/title/text()").get()
So the XPath must be right.
Unfortunately, when I run the file my spider is in, it never seems to work properly. I think the issue is in the final block of code; unfortunately, all the guides I follow seem to use CSS. I feel more comfortable with XPath because you can simply copy and paste it from Dev Tools.
import scrapy

class PageSpider(scrapy.Spider):
    name = "dorothy"
    start_urls = [
        "http://www.example.com",
        "http://www.example.com/blog"]

    def parse(self, response):
        for title in response.xpath("/html/head/title/text()"):
            yield {
                "title": sel.xpath("Title a::text").extract_first()
            }
I expected that to give me the page titles of the above URLs.
First of all, your second URL in self.start_urls is invalid and returns 404, so you will end up with only one title extracted.
Second, you need to read more about selectors: you extracted the title in your test in the shell, but got confused when using it in your spider.
Scrapy will call the parse method for each URL in self.start_urls, so you don't need to iterate through titles; you only have one per page.
You can also access the <title> tag directly by using // at the beginning of your XPath expression; see this text copied from W3Schools:
/ Selects from the root node
// Selects nodes in the document from the current node that match the selection no matter where they are
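To make the difference concrete, both of the following return the title in the Scrapy shell, but the second works no matter where <title> sits in the document:

response.xpath('/html/head/title/text()').get()  # absolute path from the root node
response.xpath('//title/text()').get()           # matches <title> anywhere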
Here is the fixed code:

import scrapy

class PageSpider(scrapy.Spider):
    name = "dorothy"
    start_urls = [
        "http://www.example.com"
    ]

    def parse(self, response):
        yield {
            "title": response.xpath('//title/text()').extract_first()
        }

Capture Google Search Term and ResultStats with Scrapy

I have built a very simple scraper using Scrapy. For the output table, I would like to show the Google News search term as well as the Google resultstats value.
The information I would like to capture is shown in the source of the Google page as
<input class="gsfi" value="Elon Musk">
and
<div id="resultStats">About 52,300 results</div>
I have already tried to include both through ('input.value::text') and ('id.resultstats::text'), which did not work, however. Does anyone have an idea how to solve this?
import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['google.com']
    start_urls = ['https://www.google.com/search?q=elon+musk&source=lnt&tbs=cdr%3A1%2Ccd_min%3A1%2F1%2F2015%2Ccd_max%3A12%2F31%2F2015&tbm=nws']

    def parse(self, response):
        for quote in response.css('div.quote'):
            item = {
                'search_title': quote.css('input.value::text').extract(),
                'results': quote.css('id.resultstats::text').extract(),
            }
            yield item
The page renders differently when you access it with Scrapy.
The search field becomes:
response.css('input#sbhost::attr(value)').get()
The results count is:
response.css('#resultStats::text').get()
Also, there is no quote class on that page.
You can test this in the scrapy shell:
scrapy shell -s ROBOTSTXT_OBEY=False "https://www.google.com/search?q=elon+musk&source=lnt&tbs=cdr%3A1%2Ccd_min%3A1%2F1%2F2015%2Ccd_max%3A12%2F31%2F2015&tbm=nws"
And then run those 2 commands.
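The session would look roughly like this (the expected values are taken from the HTML snippets quoted in the question, so your actual output may differ):

>>> response.css('input#sbhost::attr(value)').get()
'Elon Musk'
>>> response.css('#resultStats::text').get()
'About 52,300 results'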
[EDIT]
If your goal is to get one item for each URL, then you can do this:
def parse(self, response):
    item = {
        'search_title': response.css('input#sbhost::attr(value)').get(),
        'results': response.css('#resultStats::text').get(),
    }
    yield item
If your goal is to extract every result on the page, then you need something different.
