I am trying to scrape news articles from Yahoo Finance, and to do so I want to use their sitemap page https://finance.yahoo.com/sitemap/
The problem I have is that after following a link, for example https://finance.yahoo.com/sitemap/2015_04_02, Scrapy does not process the whole page, only the header. So I cannot access the links to the different articles.
Are there some internal requests that I have to send to the page?
I still get the whole page after deactivating JavaScript in my browser, and I am using Scrapy 1.6.
Thanks.
Some sites take defensive measures against robots scraping their websites. If they detect that you are non-human, they may not serve the entire page. But more than likely what is happening is that there is a bunch of client-side rendering that happens when you view the page in a web browser, which is not being executed when you request that same page in Scrapy.
Yahoo! Finance has an API. Using that will probably get you more reliable results.
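One quick way to tell which of those it is: fetch the sitemap page with a plain HTTP client and check whether the article links appear in the raw HTML at all. A minimal sketch, assuming requests and BeautifulSoup are available (the User-Agent value is just an illustrative choice):

import requests
from bs4 import BeautifulSoup

# Fetch the page the way a non-JS client (like a bare Scrapy request) would.
url = "https://finance.yahoo.com/sitemap/2015_04_02"
resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})

soup = BeautifulSoup(resp.text, "html.parser")
links = [a["href"] for a in soup.find_all("a", href=True)]

# If only header/navigation links show up here, the article links are being
# injected client-side (or withheld from non-browser clients), and you will
# need to replicate the extra request the browser makes, or use the API.
print(len(links), "links found in the raw HTML")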
Related
I am scraping job posting data from a website using BeautifulSoup. I have working code that does what I need, but it only scrapes the first page of job postings. I am having trouble figuring out how to iteratively update the url to scrape each page. I am new to Python and have looked at a few different solutions to similar questions, but have not figured out how to apply them to my particular url. I think I need to iteratively update the url or somehow click the next button and then loop my existing code through each page. I appreciate any solutions.
url: https://jobs.utcaerospacesystems.com/search-jobs
First, BeautifulSoup doesn't have anything to do with GETting web pages; you fetch the webpage yourself, then feed it to bs4 for parsing.
The problem with the page you linked is that it's rendered by JavaScript: it only renders correctly in a browser (or any other JavaScript VM).
@Fabricator is on the right track: you'll need to watch the developer console and see what AJAX requests the JS is sending to the server. In this case, also take a look at the query string params, which include a param called CurrentPage; that's probably the one you want to focus on.
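As a rough illustration, here is a minimal sketch of paging through results by changing that parameter. The exact request URL, any extra query parameters, and the markup selector all have to be copied from what the developer console actually shows; everything below other than the CurrentPage name is assumed:

import requests
from bs4 import BeautifulSoup

base_url = "https://jobs.utcaerospacesystems.com/search-jobs"

page = 1
while True:
    # CurrentPage is the query-string parameter mentioned above; any other
    # parameters seen in the developer console should be added here as well.
    resp = requests.get(base_url, params={"CurrentPage": page})
    soup = BeautifulSoup(resp.text, "html.parser")
    # Placeholder selector: replace with whatever element wraps each posting.
    jobs = soup.select("li.job-listing")
    if not jobs:
        break
    for job in jobs:
        print(job.get_text(strip=True))
    page += 1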
I want to download Supreme Court cases. Below is the code I am trying:
page = requests.get('http://judis.nic.in/supremecourt/Chrseq.aspx').text
This is the content I am getting back in page:
u'<html><p><hr></hr></p><b><center>The Problem may be due to 500 Server Error/404 Page Not Found.Please contact your system administrator.</center></b><p><hr></hr></p></html><!--0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234-->\r\n'
Is the site not scrapable or do I need to use some other method?
I checked this answer: How to scrape aspx pages with python, but the solution uses Selenium.
Is it possible to do it in Python and BeautifulSoup?
The reason is that you are hitting a URL which may no longer be served by the server. I am able to get data from all pages. I checked the response from scrapy shell with
scrapy shell "http://judis.nic.in/supremecourt/chejudis.asp"
and using XPath you can retrieve whatever data you want from the same page.
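As a small illustration, once the shell has loaded the page you can try XPath queries like these at the shell prompt (the expressions are generic examples, not tailored to the page's actual markup):

>>> response.xpath("//a/@href").extract()   # every link target on the page
>>> response.xpath("//a/text()").extract()  # the link text, e.g. case titles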
I'm not able to open the URL you used through my browser either; I get the same response from my browser. Maybe that's why you're getting that response back.
I want to crawl an ASP.NET website, but the URLs are all the same. How can I crawl specific pages using Python?
Here is the website I want to crawl:
http://www.fveconstruction.ch/index.htm
(I am using BeautifulSoup, urllib and Python 3)
What information should I get to distinguish one page from another?
If the target website is just a single-page application, it can't be crawled as such. As a workaround you can watch the requests (GET, POST, etc.) that actually go out when you manually navigate through the website, and have the crawler replay those. Or teach your crawler to execute JavaScript, at least whatever runs on the target website.
It's the website that needs to change to be easily crawlable: it needs to provide a reasonable non-AJAX version of every page that needs to be indexed, or links to such a page. Or use something like what pushState does in AngularJS.
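As a rough sketch of the "replay the observed requests" idea: assuming you have copied an endpoint and its form fields out of the browser's network tab, you could reproduce them with requests. The endpoint and payload below are purely hypothetical placeholders, not anything taken from the site:

import requests
from bs4 import BeautifulSoup

# Hypothetical values: the real endpoint and form fields must be copied from
# the browser's developer tools while navigating the site by hand.
endpoint = "http://www.fveconstruction.ch/some_handler.aspx"
payload = {"__EVENTTARGET": "pager", "__EVENTARGUMENT": "2"}

resp = requests.post(endpoint, data=payload)
soup = BeautifulSoup(resp.text, "html.parser")
print(soup.title)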
I want to use Windmill or Selenium to simulate a browser that visits a website, scrapes the content and, after analyzing the content, goes on with some action depending on the analysis.
As an example: the browser visits a website where we can find, say, 50 links. While the browser is still running, a Python script can analyze the found links and decide which link the browser should click.
My big question is how many HTTP requests this takes using Windmill or Selenium. I mean, can these two programs simulate visiting a website in a browser and scrape the content with just one HTTP request, or would they send another internal request to the website to get the links while the browser is still running?
Thanks a lot!
Selenium uses the browser, but the number of HTTP requests is not one. There will be multiple HTTP requests to the server for the JS, CSS and images (if any) referenced in the HTML document.
If you want to scrape the page with a single HTTP request, you need to use scrapers that only get what is present in the HTML source. If you are using Python, check out BeautifulSoup.
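A minimal sketch of that single-request approach: fetch the page once, parse the HTML source, and collect the links so a separate piece of code can decide what to do next (the URL is just a placeholder):

import requests
from bs4 import BeautifulSoup

# One HTTP request: only the HTML source is fetched; no JS, CSS or image
# sub-requests are made, and no JavaScript is executed.
resp = requests.get("https://example.com")
soup = BeautifulSoup(resp.text, "html.parser")

# Collect every link so another script can decide which one to follow next.
links = [a["href"] for a in soup.find_all("a", href=True)]
print(len(links), "links found")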
Hi, I have successfully scraped all the pages of a few shopping websites using Python and regular expressions.
But now I am having trouble scraping all the pages of a particular website where the next page's follow-up link is not present in the current page, like this one: http://www.jabong.com/men/clothing/mens-jeans/
This website loads the next pages' data into the same page dynamically through AJAX calls, so while scraping I am only able to get the data of the first page. But I need to scrape all the items present on all pages of that website.
I cannot find a way to get the source code of all the pages of this type of website, where the next page's follow-up link is not available in the current page. Please help me with this.
Looks like the site is using AJAX requests to get more search results as the user scrolls down. The initial set of search results can be found in the main request:
http://www.jabong.com/men/clothing/mens-jeans/
As the user scrolls down, the page detects when they reach the end of the current set of results and loads the next set as needed:
http://www.jabong.com/men/clothing/mens-jeans/?page=2
One approach would be to simply keep requesting subsequent pages till you find a page with no results.
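A minimal sketch of that approach, assuming requests and BeautifulSoup: keep incrementing the page parameter until a page comes back with no items (the product selector is a placeholder that has to match the site's real markup):

import requests
from bs4 import BeautifulSoup

base_url = "http://www.jabong.com/men/clothing/mens-jeans/"
page = 1

while True:
    resp = requests.get(base_url, params={"page": page})
    soup = BeautifulSoup(resp.text, "html.parser")
    # Placeholder selector: inspect the page to see what wraps each product.
    items = soup.select("div.product")
    if not items:
        break  # an empty page means we are past the last set of results
    for item in items:
        print(item.get_text(strip=True))
    page += 1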
By the way, I was able to determine this by using the proxy tool in screen-scraper. You could also use a tool like Charles or HttpFox. The key is to browse the site and watch what HTTP requests get made so that you can mimic them in your code.