I have a problem getting JavaScript-generated content out of an HTML page for scripting. I have used several methods, such as PhantomJS and Python's Qt library, and they all fetch most of the content nicely, but the problem is that there are JavaScript buttons inside the page like this:
(Screenshot omitted: a row of JavaScript buttons on the page.)
Now when I load this page from a script, these buttons don't default to any value, so I am getting back 0 for all the SELL/NEUTRAL/BUY values below them. Is there a way to set these values when you load the page from a script?
Example page with all the values is: https://www.tradingview.com/symbols/NEBLBTC/technicals/
Any help would be greatly appreciated.
If you are trying to achieve this with Scrapy or with a derivative of cURL or urllib, I am afraid you can't. Python has other external packages, such as Selenium, that let you interact with the JavaScript on a page, but the problem with Selenium is that it is slow. If you want something similar to Scrapy, you could check how the site works (as far as I can see, it works through AJAX or WebSockets) and fetch the info you want through urllib, the way you would with an API.
Please let me know if this makes sense or if I misunderstood your question.
I used Selenium, which was perfect for this job; it is indeed slow but fits my purpose. I also used the Selenium IDE Firefox plugin to generate the Python script, as it was very challenging to find where exactly in the code the button I had to press was.
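For reference, a minimal sketch of that Selenium approach; the CSS selectors are hypothetical placeholders, since the real ones have to be dug out of the page (for example with the Selenium IDE recorder):

from selenium import webdriver

# Load the page in a real browser so its JavaScript runs.
driver = webdriver.Firefox()
driver.get("https://www.tradingview.com/symbols/NEBLBTC/technicals/")

# Hypothetical selectors: click the button you need, then read the
# rendered values out of the live DOM.
driver.find_element_by_css_selector("button.timeframe-1D").click()
values = driver.find_element_by_css_selector("div.summary").text
print(values)

driver.quit()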
Related
I am trying to scrape a web site using Python and Beautiful Soup. I noticed that on some sites the image links, although visible in the browser, cannot be seen in the source code. However, using Chrome Inspect or Fiddler, we can see the corresponding code.
What I see in the source code is:
<div id="cntnt"></div>
But in Chrome Inspect, I can see a whole bunch of HTML/CSS code generated within this div. Is there a way to load the generated content within Python as well? I am using the regular urllib in Python and I am able to get the source, but without the generated part.
I am not a web developer, hence I am not able to express the behaviour in better terms. Please feel free to ask for clarification if my question seems vague!
You need a JavaScript engine to parse and run the JavaScript code inside the page.
There are a bunch of headless browsers that can help you (a minimal usage sketch follows the list):
http://code.google.com/p/spynner/
http://phantomjs.org/
http://zombie.labnotes.org/
http://github.com/ryanpetrello/python-zombie
http://jeanphix.me/Ghost.py/
http://webscraping.com/blog/Scraping-JavaScript-webpages-with-webkit/
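For instance, a minimal sketch driving PhantomJS (second link above) through Selenium's WebDriver bindings; this assumes the phantomjs binary is installed and on your PATH, and note that newer Selenium releases have since deprecated this driver:

from selenium import webdriver

# PhantomJS runs a full WebKit engine without opening a window.
driver = webdriver.PhantomJS()
driver.get("https://example.com")  # hypothetical target page

# page_source now contains the DOM *after* the scripts have run.
html = driver.page_source
print(html)

driver.quit()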
The content of the website may be generated after load via JavaScript. To obtain the generated content with Python, refer to this answer.
A regular scraper gets just the HTML document. To get any content generated by JavaScript logic, you instead need a headless browser that builds the DOM and loads and runs the scripts just like a regular browser would. The Wikipedia article and some other pages on the net have lists of these and their capabilities.
Keep in mind when choosing that some formerly major products in this space are now abandoned.
TRY THIS FIRST!
Perhaps the data really could be in the JavaScript itself, in which case all this JavaScript-engine business is needed. (Some GREAT links here!)
But from experience, my first guess is that the JS is pulling the data in via an ajax request. If you can get your program simulate that, you'll probably get everything you need handed right to you without any tedious parsing/executing/scraping involved!
It will take a little detective work, though. I suggest turning on your network traffic logger (such as the "Web Developer Toolbar" in Firefox) and then visiting the site. Focus your attention on any and all XMLHttpRequests. The data you need should be found somewhere in one of those responses, probably in the middle of some JSON text.
Now, see if you can re-create that request and get the data directly. (NOTE: You may have to set the User-Agent of your request so the server thinks you're a "real" web browser.)
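A minimal sketch of replaying such a request with Python 3's urllib; the endpoint, the query parameter, and the JSON layout are hypothetical placeholders for whatever you find in your network log:

import json
import urllib.request

# Hypothetical XHR endpoint copied from the browser's network log.
req = urllib.request.Request(
    "https://example.com/api/comments?id=123",
    headers={"User-Agent": "Mozilla/5.0"},  # look like a real browser
)

with urllib.request.urlopen(req) as resp:
    data = json.loads(resp.read().decode("utf-8"))

print(data)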
I need to write a Python script. The script should access a webpage which has an "upload" button; normally, when you upload a photo with that button, a new page opens, and once that page opens I need to look for a string there.
So the script should upload a photo, which I provide to the script, and then check the output page for a string.
I have no background in this sort of coding (I know basic Python).
Can I get a reference or some pointers on what reading I should do to perform that task? Thank you very much.
While this question is not specific enough to give you a good answer, I can make a couple of suggestions. I would look into using a library for sending requests to pages, such as requests. I would also look into libraries for parsing HTML, such as Beautiful Soup. Essentially you will need to use requests to get the page's HTML, and then you'll need to parse that HTML with Beautiful Soup to find what you're looking for on the page.
You should do some reading about these libraries and/or other similar ones and try to get a better understanding of your problem. Afterward, come back to Stack Overflow once you have more specific questions or problems you've run into.
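As a rough sketch of how those two libraries fit together for the upload-then-check task; the URL and the form field name are hypothetical, and this assumes the form does a plain multipart POST (check the form's action and field names in the page source first):

import requests
from bs4 import BeautifulSoup

# Hypothetical form action and field name; read them from the real form.
with open("photo.jpg", "rb") as f:
    resp = requests.post(
        "https://example.com/upload",
        files={"photo": f},
    )

# Parse the page the upload returns and look for the string.
soup = BeautifulSoup(resp.text, "html.parser")
if "expected string" in soup.get_text():
    print("Found it!")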
I have a task where I need to submit a form to a website, but they don't provide any API. I am currently using WebDriver and have faced many problems because of the asynchronous nature of my code and the browser. I am looking for a lightweight, reliable library/tool with which I can do all the tasks a user does with a browser.
CasperJS is one option which can do the job, but I am more familiar with Python, and Scrapy has a larger developer community compared to CasperJS.
Navigation utility without a browser, lightweight and fail-proof is one related question.
In short, the answer is no.
Scrapy can't render JavaScript, but a browser can, so you can use Selenium.
If you are sure you want to use Scrapy and there is JavaScript that you need to run, you can use one of the following (a sketch of the first option follows the list):
Scrapy with Selenium
Scrapy with GTK/WebKit/jswebkit
Scrapy with webdrivers
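A minimal sketch of the first option: a hypothetical Scrapy spider that lets Selenium render each page before the selectors run. The target URL and the CSS selector are placeholders:

import scrapy
from selenium import webdriver

class RenderedSpider(scrapy.Spider):
    name = "rendered"
    start_urls = ["https://example.com"]  # hypothetical target

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.driver = webdriver.Firefox()

    def parse(self, response):
        # Let the real browser execute the page's JavaScript.
        self.driver.get(response.url)
        # Wrap the rendered DOM in a Scrapy selector.
        sel = scrapy.Selector(text=self.driver.page_source)
        for title in sel.css("h1::text").extract():
            yield {"title": title}

    def closed(self, reason):
        self.driver.quit()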
If you like CasperJS but want to stick with Python, you should have a look at Ghost.py
I'm a little new to web crawlers and such, though I've been programming for a year already. So please bear with me as I try to explain my problem here.
I'm parsing info from Yahoo! News, and I've managed to get most of what I want, but there's a little portion that has stumped me.
For example: http://news.yahoo.com/record-nm-blaze-test-forest-management-225730172.html
I want to get the numbers beside the thumbs-up and thumbs-down icons in the comments. When I use "Inspect Element" in my Chrome browser, I can clearly see what I have to look for - namely, an em tag under the div with class 'ugccmt-rate'. However, I'm not able to find this in my Python program. In trying to track down the root of the problem, I viewed the page source, and it seems that this tag is not there. Do you guys know how I should approach this problem? Does it have something to do with the JavaScript on the page that displays the info only after it runs? I'd appreciate some pointers in the right direction.
Thanks.
The page is being generated via JavaScript.
Check if there is a mobile version of the website first. If not, check for any APIs or RSS/Atom feeds. If there's nothing else, you'll either have to manually figure out what the JavaScript is loading and from where, or use Selenium to automate a browser that renders the JavaScript for you for parsing.
Using the Web Console in Firefox you can pretty easily see what requests the page is actually making as it runs its scripts, and figure out what URI returns the data you want. Then you can request that URI directly in your Python script and tease the data out of it. It is probably in a format that Python already has a library to parse, such as JSON.
Yahoo! may have some stuff on their server side to try to prevent you from accessing these data files in a script, such as checking the browser (user-agent header), cookies, or referrer. These can all be faked with enough perseverance, but you should take their existence as a sign that you should tread lightly. (They may also limit the number of requests you can make in a given time period, which is impossible to get around.)
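A minimal sketch of faking those headers with the requests library; the endpoint, the Referer, and the cookie are hypothetical stand-ins for whatever your browser actually sends:

import requests

resp = requests.get(
    "https://news.yahoo.com/some/comment-api",  # hypothetical endpoint
    headers={
        "User-Agent": "Mozilla/5.0",            # pretend to be a browser
        "Referer": "https://news.yahoo.com/",
    },
    cookies={"B": "value-copied-from-browser"},  # hypothetical cookie
)
data = resp.json()  # the comment counts are likely in JSON
print(data)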
How can I execute links2 to open a web page and locate and click a text link with Python?
Is pexpect able to do it? Any examples are appreciated.
I'm not sure why you want to do this. If you just want to fetch the page and process its content, urllib2 together with an HTML parser (BeautifulSoup, for example) may be just fine.
If you do want to simulate mouse clicks, you may want to use AutoPy.
Why do you want to use links2? I don't see how you would benefit from it. It is probably better to approach your problem in a different way, for example with mechanize or maybe even twill; a mechanize sketch is below.
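A minimal mechanize sketch of locating and following a text link without a browser; the URL and the link text are hypothetical placeholders (mechanize is originally a Python 2-era library, though ports exist):

import mechanize

br = mechanize.Browser()
br.open("http://example.com")             # hypothetical page

# "Click" a link by matching its visible text.
resp = br.follow_link(text="Next page")   # hypothetical link text
print(resp.read())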
Please provide a description of your overall problem instead of this specific question.
If you want JavaScript support, use Selenium RC with whatever language you are comfortable with.