I want to scrape content from this website: https://www.fireant.vn/App#/company-data/ACB.
As far as I know, the content of the table I want to scrape is dynamically rendered by AngularJS. They use ng-repeat to pass all the values like time, volume, and price into the table.
</tr><!-- end ngRepeat: quote in intradayQuotes | orderBy: '-Date' --><tr ng-repeat="quote in intradayQuotes | orderBy: '-Date'" class="ng-scope">
This code is as far as I can get, as I really don't know what kind of object the table is:
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome()  # or whichever driver is being used
driver.get('https://www.fireant.vn/App#/company-data/ACB')
driver.set_window_position(0, 0)
driver.set_window_size(100000, 200000)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
soup = BeautifulSoup(driver.page_source, 'lxml')
Any instruction on the matter would be very helpful.
Data from the table I want to get:
print(driver.page_source)
You'll want to do some digging around on the site to find the CSS selectors (or XPath, if you're more comfortable with XML) and use them to get the elements / text that you're interested in.
Instead of using Beautiful Soup, for Angular (and other JavaScript-rendered content) you can just grab the data straight from WebDriver.
For instance:
# driver.find_element_by_css_selector('<SELECTOR FOR THE ELEMENTS YOU WANT>')
# Note: find_elements_by_class_name takes the bare class name (no leading dot)
# and returns a list, so collect each element's text:
prices = [el.text for el in driver.find_elements_by_class_name('price')]
# prices is now a list with the text of every element that has the "price" class.
Given the screenshot you posted, it seems that Price and Time have identical HTML element attributes -- however, you can use XPath indexing in this case to retrieve the item you want.
To retrieve price:
prices = driver.find_elements_by_xpath("//tbody/tr[2]/td[2]/b[@class='ng-binding']")
Based on the screenshot, it looks like the 'Price' text is in the 2nd tr element under tbody, and the 'Price' cell is the 2nd td element under that tr. While I do not normally recommend this type of syntax, your scenario is a special case where the HTML is all nearly identical.
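If you need the whole table rather than a single cell, here is a minimal sketch of the same idea (not tested against the live page; the Chrome driver, the wait timeout, and the meaning and order of the columns are assumptions on top of what the question shows):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://www.fireant.vn/App#/company-data/ACB')

# Wait until AngularJS has rendered at least one ng-repeat row (selector taken
# from the <tr ng-repeat="quote in intradayQuotes ..."> markup in the question).
rows = WebDriverWait(driver, 15).until(
    EC.presence_of_all_elements_located(
        (By.CSS_SELECTOR, "tr[ng-repeat*='intradayQuotes']")
    )
)

for row in rows:
    # One entry per cell in the row (time, volume, price, ... -- the exact
    # column order is an assumption based on the question's description).
    cells = [td.text for td in row.find_elements(By.TAG_NAME, "td")]
    print(cells)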
Related
Web scraping a table into an Excel file. It's a "dynamic" table, 10 rows at a time.
All the data is placed into Excel correctly, but I'm having issues with the href data.
The issue I am facing is that some rows don't have an href. I am using the following XPath:
map = driver.find_elements(By.XPATH, '//*[@id="table_1"]/tbody//td[12]/a')
To get the href:
.get_attribute("href")[30:].split(",%20")[0]
.get_attribute("href")[30:].split(",%20")[1]
Via the above XPath I can find every href, but when a row has no href, the href data from the following row is placed into the row where no href data should be.
I tried the below (without the "/a") but it returns nothing:
map_test = driver.find_elements(By.XPATH, '//*[@id="table_1"]/tbody//td[12]')
When the code below is used, it returns the text content, which is not what I need, but it keeps the data where it should be.
.get_attribute("textContent")
Any idea how I can find the hrefs and keep the data in the rows where they should be?
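One way to keep the rows aligned (a rough sketch only; the table id and column index are taken from the question, and the /tbody/tr/td[12] row structure is an assumption) is to locate the cell itself for every row and only read the href when an anchor is actually present:
from selenium.webdriver.common.by import By

# One td per row, so rows without a link still produce an entry.
cells = driver.find_elements(By.XPATH, '//*[@id="table_1"]/tbody/tr/td[12]')

hrefs = []
for cell in cells:
    anchors = cell.find_elements(By.TAG_NAME, 'a')  # empty list when the row has no link
    hrefs.append(anchors[0].get_attribute('href') if anchors else None)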
Scraping links should be a simple feat, usually just grabbing the href value of the a tag.
I recently came across this website (https://sunteccity.com.sg/promotions) where the href value of each item's a tag cannot be found, but the redirection still works. I'm trying to figure out a way to grab the items and their corresponding links. My typical Python Selenium code looks something like this:
all_items = bot.find_elements_by_class_name('thumb-img')
for promo in all_items:
    a = promo.find_elements_by_tag_name("a")
    print("a[0]: ", a[0].get_attribute("href"))
However, I can't seem to retrieve any href or onclick attributes, and I'm wondering if this is even possible. I also noticed that I couldn't right-click and open the link in a new tab.
Are there any ways around getting the links of all these items?
Edit: Are there any ways to retrieve all the links of the items on the pages?
i.e.
https://sunteccity.com.sg/promotions/724
https://sunteccity.com.sg/promotions/731
https://sunteccity.com.sg/promotions/751
https://sunteccity.com.sg/promotions/752
https://sunteccity.com.sg/promotions/754
https://sunteccity.com.sg/promotions/280
...
Edit:
Adding an image of one such anchor tag for better clarity:
Reverse-engineering the JavaScript that takes you to the promotion pages (seen in https://sunteccity.com.sg/_nuxt/d4b648f.js) gives you a way to get all the links, which are based on the HappeningID. You can verify this by running the following in the JS console, which gives you the first promotion's ID:
window.__NUXT__.state.Promotion.promotions[0].HappeningID
Based on that, you can create a Python loop to get all the promotions:
items = driver.execute_script("return window.__NUXT__.state.Promotion;")
for item in items["promotions"]:
    base = "https://sunteccity.com.sg/promotions/"
    happening_id = str(item["HappeningID"])
    print(base + happening_id)
That generated the following output:
https://sunteccity.com.sg/promotions/724
https://sunteccity.com.sg/promotions/731
https://sunteccity.com.sg/promotions/751
https://sunteccity.com.sg/promotions/752
https://sunteccity.com.sg/promotions/754
https://sunteccity.com.sg/promotions/280
https://sunteccity.com.sg/promotions/764
https://sunteccity.com.sg/promotions/766
https://sunteccity.com.sg/promotions/762
https://sunteccity.com.sg/promotions/767
https://sunteccity.com.sg/promotions/732
https://sunteccity.com.sg/promotions/733
https://sunteccity.com.sg/promotions/735
https://sunteccity.com.sg/promotions/736
https://sunteccity.com.sg/promotions/737
https://sunteccity.com.sg/promotions/738
https://sunteccity.com.sg/promotions/739
https://sunteccity.com.sg/promotions/740
https://sunteccity.com.sg/promotions/741
https://sunteccity.com.sg/promotions/742
https://sunteccity.com.sg/promotions/743
https://sunteccity.com.sg/promotions/744
https://sunteccity.com.sg/promotions/745
https://sunteccity.com.sg/promotions/746
https://sunteccity.com.sg/promotions/747
https://sunteccity.com.sg/promotions/748
https://sunteccity.com.sg/promotions/749
https://sunteccity.com.sg/promotions/750
https://sunteccity.com.sg/promotions/753
https://sunteccity.com.sg/promotions/755
https://sunteccity.com.sg/promotions/756
https://sunteccity.com.sg/promotions/757
https://sunteccity.com.sg/promotions/758
https://sunteccity.com.sg/promotions/759
https://sunteccity.com.sg/promotions/760
https://sunteccity.com.sg/promotions/761
https://sunteccity.com.sg/promotions/763
https://sunteccity.com.sg/promotions/765
https://sunteccity.com.sg/promotions/730
https://sunteccity.com.sg/promotions/734
https://sunteccity.com.sg/promotions/623
You are using the wrong locator; it brings back a lot of irrelevant elements.
Instead of find_elements_by_class_name('thumb-img'), please try find_elements_by_css_selector('.collections-page .thumb-img'), so your code will be:
all_items = bot.find_elements_by_css_selector('.collections-page .thumb-img')
for promo in all_items:
    a = promo.find_elements_by_tag_name("a")
    print("a[0]: ", a[0].get_attribute("href"))
You can also get the desired links directly with the .collections-page .thumb-img a locator, so that your code could be:
links = bot.find_elements_by_css_selector('.collections-page .thumb-img a')
for link in links:
    print(link.get_attribute("href"))
I have a problem while trying to access some values on a website during web scraping. The text I want to extract is in a class that contains several pieces of text separated by <b> tags (these <b> tags also contain text that is important for me).
So firstly, I tried to look for the <b> tag with the text I needed ('Category' in this case) and then extract the exact category from the text that follows this <b> tag. I could use a precise XPath, but that is not an option here because other pages I need to scrape contain a different number of rows in this sidebar, so the locations, as well as the XPaths, differ.
The expected output is 'utility' - the category in the sidebar.
The website and the text I need to extract look like this (look at the sidebar on the right containing 'Category'):
The element looks like this:
And the code I tried:
from selenium import webdriver

driver = webdriver.Safari()
driver.get('https://www.statsforsharks.com/entry/MC_Squares')
element = driver.find_elements_by_xpath("//b[contains(text(), 'Category')]/following-sibling")
for value in element:
    print(value.text)
driver.close()
The link to the page with the data is https://www.statsforsharks.com/entry/MC_Squares.
Thank you!
You might be better off using regex here, as the whole text comes under the 'company-sidebar-body' class, where only some of the text is inside <b> tags and some is not.
So, you can get the text of the class first:
sidebartext = driver.find_element_by_class_name("company-sidebar-body").text
That will give you the following:
"EOY Proj Sales: $1,000,000\r\nSales Prev Year: $200,000\r\nCategory: Utility\r\nAsking Deal\r\nEquity: 10%\r\nAmount: $300,000\r\nValue: $3,000,000\r\nEquity Deal\r\nSharks: Kevin O'Leary\r\nEquity: 25%\r\nAmount: $300,000\r\nValue: $1,200,000\r\nBite: -$1,800,000"
You can then use regex to target the category:
import re
c = re.search(r"Category:\s\w+", sidebartext).group()
print(c)
c will result in 'Category: Utility' which you can then work with. This will also work if the value of the category ('Utility') is different on other pages.
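For instance, to pull out just the value from that match (a small illustration of the "work with it" step):
category = c.split(": ", 1)[1]  # 'Utility'
print(category)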
There are easier ways when it's a MediaWiki website. You could, for instance, access the page data through the API with a JSON request and parse it with a much more limited DOM.
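As a rough sketch of that idea (assuming the wiki exposes the standard api.php endpoint at the site root, which is an assumption and not something verified here):
import requests

api_url = "https://www.statsforsharks.com/api.php"  # assumed endpoint location
params = {
    "action": "parse",      # ask MediaWiki to render a single page
    "page": "MC_Squares",   # page title, taken from the URL in the question
    "format": "json",
}
data = requests.get(api_url, params=params).json()
page_html = data["parse"]["text"]["*"]  # the rendered HTML of just that page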
Any particular reason you want to scrape my website?
I am working on a project where I am crawling thousands of websites to extract text data, the end use case is natural language processing.
EDIT: Since I am crawling hundreds of thousands of websites, I cannot tailor scraping code to each one, which means I cannot search for specific element IDs; the solution I am looking for is a general one.
I am aware of solutions such as the .get_text() function from Beautiful Soup. The issue with this method is that it gets all the text from the website, much of it irrelevant to the main topic of that particular page. For the most part a page will be dedicated to a single main topic, but on the sides, top, and bottom there may be links or text about other subjects, promotions, or other content.
With the .get_text() function, all the text on the page is returned in one go; the problem is that it combines everything (the relevant parts with the irrelevant ones). Is there another function similar to .get_text() that returns all the text but as a list, where every list item is a specific section of the text? That way it can be known where new subjects start and end.
As a bonus, is there a way to identify the main body of text on a web page?
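(As a side note, and only a rough sketch rather than a full answer: Beautiful Soup can already hand the text back in pieces instead of as one combined string, e.g. via stripped_strings or by grouping per block-level element. The URL below is a placeholder.)
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/some-page").text  # placeholder URL
soup = BeautifulSoup(html, "html.parser")

# One string per text node, whitespace stripped, in document order.
pieces = list(soup.stripped_strings)

# Or group the text by block-level element instead of by text node.
sections = [el.get_text(" ", strip=True) for el in soup.find_all(["p", "section", "article"])]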
Below are some snippets that you could use to query data in the desired way using BeautifulSoup4 and Python 3:
import requests
from bs4 import BeautifulSoup
response = requests.get('https://yoursite/page')
soup = BeautifulSoup(response.text, 'html.parser')
# Print the body's direct children in list form
print(soup.body.contents)
# Print the first found div on html page
print(soup.find('div'))
# Print all divs on the html page in list form
print(soup.find_all('div'))
# Print the element with 'required_element_id' id
print(soup.find(id='required_element_id'))
# Print all html elements in list form that match the given CSS selectors
print(soup.select('your-css-selectors'))
# Print the value of the given attribute on the element with 'someid' id
print(soup.find(id='someid').get("attribute-name"))
# You can also break your one large query into multiple queries
parent = soup.find(id='someid')
# getText() returns the text between the opening and closing tags
print(parent.select(".some-class")[0].getText())
For more advanced requirements, you can check out Scrapy as well. Let me know if you face any challenges implementing this or if your requirement is something else.
I wish to extract text from an HTML page using XPath.
The particular text is in the td to the right of "Description:" (which is inside a th element) on the page at the URL in the source below.
In the first call (commented out) I tried the absolute XPath taken from the Chrome inspector, but I get an empty list.
The next call works and gives the heading:
"Description:"
I require a generic XPath query that would take a text heading (like "Description:") and give the text value of the td next to it.
import requests
from lxml import html

url = 'http://datrack.canterbury.nsw.gov.au/cgi/datrack.pl?cmd=download&id=ZiFfLxV6W1xHWBN1UwR5SVVSAV0GXUZUcGFGHhAyTykQAG5CWVcARwM='
page = requests.get(url)
tree = html.fromstring(page.content)
# desc = tree.xpath('//*[@id="documentpreview"]/div[1]/table[1]/tbody/tr[2]/td//text()')
desc = tree.xpath("//text()[contains(., 'Description:')]")
I have tried variations of XPath queries but my knowledge is not deep enough.
Any help would be appreciated.
Use //*[contains(text(), 'Description:')] to find tags whose text contains Description:, and use following-sibling::td to find following siblings which are td tags:
In [180]: tree.xpath("//*[contains(text(), 'Description:')]/following-sibling::td/text()")
Out[180]: ['Convert existing outbuilding into a recreational area with bathroom and kitchenette']
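To make that generic, as the question asks, here is a small sketch of a helper built on the same XPath (the function name is made up for illustration; it relies on lxml's support for XPath variables):
def field_value(tree, heading):
    # Text of the td that follows the element containing `heading`, or None.
    values = tree.xpath(
        "//*[contains(text(), $h)]/following-sibling::td/text()", h=heading
    )
    return values[0].strip() if values else None

print(field_value(tree, 'Description:'))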