Crawl a webpage which is generated by JavaScript - Python

I want to crawl the data from this website
I only need the text "Pictograph - A spoon 勺 with something 一 in it"
I checked Network -> Doc and I think the information is hidden here.
Because I found there's a line that reads
i.length > 0 && (r += '<span>» Formation: <\/span>' + i + _Eb)
And I think this code generates the part of the page that we can see from the link.
However, I don't understand what this code is. It contains HTML, but it also contains so many function() calls.
Update
If the code is JavaScript, I would like to know how I can crawl the website without using Selenium.
Thanks!

This page uses JavaScript to add this element. Using Selenium I can get the HTML after the element has been added and then search the text in that HTML. The HTML has a strange structure - all the text sits in one tag, so this part has no dedicated tag to find it by. But it is the last text in that tag and it starts with "Formation:", so I use BeautifulSoup to get all the text (including subtags) with get_text() and then use split('Formation:') to get the text after that marker.
import selenium.webdriver
from bs4 import BeautifulSoup as BS

driver = selenium.webdriver.Firefox()
driver.get('https://www.archchinese.com/chinese_english_dictionary.html?find=%E4%B8%8E')

# Parse the rendered page source (after JavaScript has added the element).
soup = BS(driver.page_source, 'html.parser')

# All of the definition text lives in one div; take everything after "Formation:".
text = soup.find('div', {'id': "charDef"}).get_text()
text = text.split('Formation:')[-1]
print(text.strip())
Selenium may run slower, but it was faster to create a working solution this way.
If I could find the URL that JavaScript uses to load the data, then I would use it without Selenium, but I didn't see this information in the XHR responses. A few responses were compressed (probably gzip) or encoded, and maybe the text was in there, but I didn't try to uncompress/decode them.
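If such an XHR endpoint could be identified in the Network tab, a plain requests call would be enough, since requests transparently decompresses gzip/deflate responses. This is only a sketch: the endpoint and parameter below are placeholders, not the site's real API.
import requests

# Hypothetical XHR endpoint spotted in the Network tab - not the site's real API.
url = 'https://www.archchinese.com/some-endpoint'
params = {'find': '与'}

# requests decompresses gzip/deflate bodies automatically.
response = requests.get(url, params=params)
text = response.text

if 'Formation:' in text:
    print(text.split('Formation:')[-1].strip())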

Related

Reading information from a page without Web Locators, Selenium, Python

xpath_id = '/html/body'
conf_code = driver.find_element(By.XPATH, (xpath_id))
code_list = []
for c in range(len(conf_code)):
    code_list.append(conf_code[c].text)
As seen above, I chose the XPath locator, but I can't locate the text. That is because this particular webpage is completely blank apart from some text in the <body>.
The HTML of the page is below:
<html>, <head>, <body> 'text that i want to read and save' </body>, </html>
How can I read this text and then store it in a variable?
Your question is not clear enough.
Anyway, in case there are multiple elements containing texts on that page you can use something like this:
xpath_id = '/html/body/*'
conf_code = driver.find_elements(By.XPATH, (xpath_id))
code_list = []
for c in conf_code:
    code_list.append(c.text)
Don't forget to add some delay so that the page is completely loaded before you get all these elements from it.
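For example, an explicit wait can replace a fixed sleep. This is only a sketch with a placeholder URL, assuming the text sits directly under <body> as described:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('http://your-url-here.com')  # placeholder URL

# Wait up to 10 seconds for the body to appear before reading from it.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, '/html/body'))
)

code_list = [c.text for c in driver.find_elements(By.XPATH, '/html/body/*')]
print(code_list)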
If the website you're grabbing is really that simple, you don't need Selenium. Grab the page with requests and split the result on the body tags to get the text. That's much simpler code and it avoids the overhead of the Selenium driver.
import requests
url = "http://your-url-here.com"
content = requests.get(url).text
the_string_youre_looking_for = content.split('<body>')[1].split('</body>')[0]
Is this what you're looking for? If not, maybe try and reword your question, because it's a bit hard to understand what you want your code to do and in what context.
Resolved using
print(driver.page_source)
I got the full HTML content, and because of its simplicity it was easy to extract the required content within the <body> tag.
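Roughly, that combination looks like this - a sketch with a placeholder URL, assuming the <body> tag carries no attributes:
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('http://your-url-here.com')  # placeholder URL

# The rendered source is trivial, so splitting on the body tags is enough here.
html = driver.page_source
text = html.split('<body>')[1].split('</body>')[0].strip()
print(text)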

List links of xls files using Beautifulsoup

I'm trying to retrieve a list of downloadable xls files on a website.
I'm a bit reluctant to provide full links to the website in question.
Hopefully I'm able to provide all necessary details all the same.
If this is useless, please let me know.
Download .xls files from a webpage using Python and BeautifulSoup is a very similar question, but the details below will show that the solution most likely has to be different, since the links on that particular site are tagged with an href attribute:
And the ones I'm trying to get are not tagged the same way.
On the webpage, the files that are available for downloading are listed like this:
A simple mousehover gives these further details:
I'm following the setup here with a few changes to produce the snippet below that provides a list of some links, but not to any of the xls files:
from bs4 import BeautifulSoup
import urllib.request
import re

def getLinks(url):
    with urllib.request.urlopen(url) as response:
        html = response.read()
    soup = BeautifulSoup(html, "lxml")
    links = []
    for link in soup.findAll('a', attrs={'href': re.compile("^http://")}):
        links.append(link.get('href'))
    return links

links1 = getLinks("https://SOMEWEBSITE")
A further inspection using Ctrl+Shift+I in Google Chrome reveals that those particular links do not have an href attribute, but rather an ng-href attribute:
So I tried changing that in the snippet above, but with no success.
And I've tried different combinations with re.compile("^https://"), attrs={'ng-href': ...} and links.append(link.get('ng-href')), but still with no success.
So I'm hoping someone has a better suggestion!
EDIT - Further details
It seems it's a bit problematic to read these links directly.
When I use Ctrl+Shift+I and "Select an element in the page to inspect it" (Ctrl+Shift+C), this is what I can see when I hover over one of the links listed above:
And what I'm looking to extract here is the information associated with the ng-href attribute. But if I right-click the page and select Show Source, the same attribute only appears once, along with some metadata(?):
And I guess this is why my rather basic approach is failing in the first place.
I'm hoping this makes sense to some of you.
Update:
using selenium
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
driver = webdriver.Chrome()
driver.get('http://.....')
# wait max 15 seconds until the links appear
xls_links = WebDriverWait(driver, 15).until(lambda d: d.find_elements_by_xpath('//a[contains(@ng-href, ".xls")]'))
# Or
# xls_links = WebDriverWait(driver, 15).until(lambda d: d.find_elements_by_xpath('//a[contains(@href, ".xls")]'))
links = []
for link in xls_links:
    url = "https://SOMEWEBSITE" + link.get_attribute('ng-href')
    print(url)
    links.append(url)
Assuming ng-href is not dynamically generated: from your last image I see that the URL does not start with https:// but with a slash /, so you can try a regex that matches URLs containing .xls:
for link in soup.findAll('a', attrs={'ng-href': re.compile(r"\.xls")}):
    xls_link = "https://SOMEWEBSITE" + link['ng-href']
    print(xls_link)
    links.append(xls_link)
My guess is that the data you are trying to crawl is created dynamically: ng-href is one of AngularJS's constructs. You could try using Google Chrome's Network inspection as you already did (Ctrl+Shift+I) and see if you can find the URL that is queried (open the Network tab and reload the page). The query should typically return JSON with the links to the xls files.
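As a sketch of that idea: once the queried URL is found, the file list could be pulled without a browser at all. The endpoint and JSON field name below are assumptions, not the site's real API.
import requests

# Placeholder endpoint and field name - take the real ones from the Network tab.
api_url = "https://SOMEWEBSITE/api/documents"
data = requests.get(api_url).json()

xls_links = []
for item in data:
    href = item.get("href", "")
    if href.endswith(".xls"):
        xls_links.append("https://SOMEWEBSITE" + href)

print(xls_links)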
There is a thread about a similar problem here. Perhaps that helps you: Unable to crawl some href in a webpage using python and beautifulsoup

Website scraping with python3 & beautifulsoup 4

I'm starting to make progress on a website scraper, but I've run into two snags. Here is the code first:
import requests
from bs4 import BeautifulSoup
r = requests.get("http://www.nytimes.com")
soup = BeautifulSoup(r.text, "html.parser")
headlines = soup.find_all(class_="story-heading")
for headline in headlines:
    print(headline)
Questions
Why do you have to use find_all(class_="blahblahblah")
instead of just find_all("blahblahblah")? I realize that story-heading is a class of its own, but can't I just search all the HTML using find_all and get the same results? The BeautifulSoup docs show find_all("a") returning all the anchor tags in an HTML document, so why won't find_all("story-heading") do the same?
Is it because, if I try to do that, it will just find all the instances of "story-heading" within the HTML and return those? I am trying to get Python to return everything in that tag. That's my best guess.
Why do I get all this extra junk code? Shouldn't my find_all request just show me everything within the story-heading tag? I'm getting a lot more text than what I'm trying to select.
Beautiful Soup allows you to use CSS selectors. Look in the docs for "CSS selector".
You can find all elements with class "story-heading" like so:
soup.select(".story-heading")
If instead you're looking for IDs, just do
soup.select("#id-name")

Web scraping using Beautiful Soup separating HTML and Javascript and CSS

I am trying to scrape a web page which comprises JavaScript, CSS and HTML. This web page also has some text. When I open the web page from a file handle and run the soup.get_text() command, I would like to get only the HTML portion's text and nothing else. Is it possible to do this?
The current source code is:
from bs4 import BeautifulSoup
soup = BeautifulSoup(open("/home/Desktop/try.html"))
print(soup.get_text())
What do I change to get only the HTML portion in a web page and nothing else?
Try to remove the contents of the tags that hold the unwanted text (or style attributes).
Here is some code (tested in basic cases)
from bs4 import BeautifulSoup

soup = BeautifulSoup(open("/home/Desktop/try.html"))

# Clear every script tag
for tag in soup.find_all('script'):
    tag.clear()

# Clear every style tag
for tag in soup.find_all('style'):
    tag.clear()

# Remove style attributes (if needed)
for tag in soup.find_all(style=True):
    del tag['style']

print(soup.get_text())
It depends on what you mean by get. Dmralev's answer will clear the other tags, which will work fine. However, <HTML> is a tag within the soup, so
print(soup.html.get_text())
should also work, with fewer lines, assuming "portion" means that the HTML is separate from the rest of the code (i.e. the other code is not within <html> tags).

How to find links with all uppercase text using Python (without a 3rd party parser)?

I am using BeautifulSoup in a simple function to extract links that have all uppercase text:
import BeautifulSoup

def findAllCapsUrls(page_contents):
    """ given HTML, returns a list of URLs that have ALL CAPS text
    """
    soup = BeautifulSoup.BeautifulSoup(page_contents)
    all_urls = soup.findAll(name='a')
    # if the text for the link is ALL CAPS then add the link to good_urls
    good_urls = []
    for url in all_urls:
        text = url.find(text=True)
        if text and text.upper() == text:
            good_urls.append(url['href'])
    return good_urls
Works well most of the time, but a handful of pages will not parse correctly in BeautifulSoup (or lxml, which I also tried) due to malformed HTML on the page, resulting in an object with no (or only some) links in it. A "handful" might sound like not-a-big-deal, but this function is being used in a crawler so there could be hundreds of pages that the crawler will never find...
How can the above function be refactored to not use a parser like BeautifulSoup? I've searched around for how to do this using regex, but all the answers say "use BeautifulSoup." Alternatively, I started looking at how to "fix" the malformed HTML so that it parses, but I don't think that is the best route...
What is an alternative solution, using re or something else, that can do the same as the function above?
If the HTML pages are malformed, there are not a lot of solutions that can really help you. BeautifulSoup or another parsing library is the way to go to parse HTML files.
If you want to avoid the library path, you could use a regexp to match all your links (see regular-expression-to-extract-url-from-an-html-link), using a range of [A-Z] for the text.
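A rough sketch of that regex route, assuming double-quoted href attributes and plain (non-nested) link text:
import re

def find_all_caps_urls_re(page_contents):
    """Return hrefs whose anchor text is entirely uppercase, using only re."""
    # Assumes double-quoted hrefs and plain text between the anchor tags.
    pattern = r'<a\s+[^>]*href\s*=\s*"([^"]*)"[^>]*>([A-Z0-9\s.,:;!?-]+)</a>'
    return [href for href, text in re.findall(pattern, page_contents)]

# Example:
# find_all_caps_urls_re('<a href="/x">HELLO</a> <a href="/y">hello</a>') -> ['/x']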
When I need to parse really broken HTML and speed is not the most important factor, I automate a browser with Selenium and WebDriver.
This is the most resilient way of HTML parsing I know.
Check this tutorial; it shows how to extract Google suggestions using WebDriver (the code is in Java, but it can be adapted to Python).
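A minimal sketch of that approach for this question: the browser repairs the markup, then the ALL CAPS filter runs on whatever the DOM ends up containing.
from selenium import webdriver

def find_all_caps_urls_selenium(url):
    """Let a real browser fix the broken markup, then keep links with ALL CAPS text."""
    driver = webdriver.Firefox()
    try:
        driver.get(url)
        good_urls = []
        for a in driver.find_elements_by_tag_name('a'):
            text = a.text
            if text and text.upper() == text:
                good_urls.append(a.get_attribute('href'))
        return good_urls
    finally:
        driver.quit()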
I ended up with a combination of regex and BeautifulSoup:
import re
import BeautifulSoup

def findAllCapsUrls2(page_contents):
    """ returns a list of URLs that have ALL CAPS text, given
        the HTML from a page. Uses a combo of RE and BeautifulSoup
        to handle malformed pages.
    """
    # get all anchors on page using regex
    p = r'<a\s+href\s*=\s*"([^"]*)"[^>]*>(.*?(?=</a>))</a>'
    re_urls = re.compile(p, re.DOTALL)
    all_a = re_urls.findall(page_contents)
    # if the text for the anchor is ALL CAPS then add the link to good_urls
    good_urls = []
    for a in all_a:
        href = a[0]
        a_content = a[1]
        a_soup = BeautifulSoup.BeautifulSoup(a_content)
        text = ''.join([s.strip() for s in a_soup.findAll(text=True) if s])
        if text and text.upper() == text:
            good_urls.append(href)
    return good_urls
This is working for my use cases so far, but I wouldn't guarantee it to work on all pages. Also, I only use this function if the original one fails.
