Python Selenium WebDriver: click on all "Full Review" buttons when the page has loaded

I am trying to retrieve information (including reviews) about an app from the Google Play store. Some reviews are short, while longer ones are truncated behind a "Full Review" button. When the page has loaded in the browser, I want to click every "Full Review" button (if any) and then start extracting information from the page. Here is my code:
baseurl = 'https://play.google.com/store/apps/details?id=com.zihua.android.mytracks&hl=en&showAllReviews=true'
driver.get(baseurl)
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//div[@class='d15Mdf bAhLNe']//div[@class='cQj82c']/button[text()='Full Review']"))).click()
person_info = driver.find_elements_by_xpath("//div[@class='d15Mdf bAhLNe']")
for person in person_info:
    review_response_person = ''
    response_date = ''
    response_text = ''
    name = person.find_element_by_xpath(".//span[@class='X43Kjb']").text
    review = person.find_element_by_xpath(".//div[@class='UD7Dzf']/span").text
However, the program throws the following error on the third line of the code (i.e. the WebDriverWait(driver, 10) line):
ElementClickInterceptedException: Message: element click intercepted: Element <button class="LkLjZd ScJHi OzU4dc " jsaction="click:TiglPc" jsname="gxjVle">...</button> is not clickable at point (380, 10). Other element would receive the click: ...
Could anyone guide me how to fix the issue?

It looks as though the full review text is inside the span just below the visible trimmed review (jsname="fbQN7e"), so you could do something like this:
driver.get("https://play.google.com/store/apps/details?id=com.zihua.android.mytracks&hl=en&showAllReviews=true")
reviews = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.XPATH, "//div[#jsname='fk8dgd']"))
)
for review in reviews.find_elements(By.CSS_SELECTOR, "div[jscontroller='H6eOGe']"):
reviewText = review.find_element(By.CSS_SELECTOR, "span[jsname='fbQN7e']")
print(reviewText.get_attribute("innerHTML"))
However, that will likely only return the first batch of reviews; you'll need to scroll the page down to the bottom so that they are all loaded in. There are other answers that give good examples of how to do that. Once added, the loop will iterate over each full review without needing to click a button to expand it.
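That scroll-to-the-bottom step could be sketched like this (a sketch only, not from the original answer; the `pause` and `max_rounds` values are arbitrary tuning knobs you may need to adjust, and the helper name `load_all_reviews` is mine):

```python
import time

def load_all_reviews(driver, pause=1.0, max_rounds=20):
    """Scroll to the bottom repeatedly until the page height stops growing,
    so the lazily-loaded reviews are all in the DOM before extraction."""
    last_height = driver.execute_script("return document.body.scrollHeight")
    for _ in range(max_rounds):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)  # give the page time to fetch the next batch
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            break  # nothing new loaded; assume we have everything
        last_height = new_height
```

Call `load_all_reviews(driver)` right after `driver.get(...)` and before the extraction loop.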

Related

Python Selenium: Click Instagram next post button

I'm creating an Instagram bot but cannot figure out how to navigate to the next post.
Here is what I tried
#Attempt 1
next_button = driver.find_element_by_class_name('wpO6b ')
next_button.click()
#Attempt 2
_next = driver.find_element_by_class_name('coreSpriteRightPaginationArrow').click()
Neither of the two worked, and I get a NoSuchElementException or ElementClickInterceptedException. What corrections do I need to make here?
This is the button I'm trying to click (to get to the next post):
I have checked your class name coreSpriteRightPaginationArrow and I couldn't find any element with that exact class name, but I did see the class name partially. So it might help if you try XPath contains(), as shown below.
//div[contains(@class,'coreSpriteRight')]
Another XPath uses the class wpO6b. There are 10 elements with the same class name, so it is filtered using @aria-label='Next':
//button[@class='wpO6b ']//*[@aria-label='Next']
Try these and let me know if it works.
I have tried the code below, and it clicks the next-post button for the specified range:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

if __name__ == '__main__':
    driver = webdriver.Chrome('/Users/yosuvaarulanthu/node_modules/chromedriver/lib/chromedriver/chromedriver')  # Optional argument; if not specified, the PATH is searched.
    driver.maximize_window()
    driver.implicitly_wait(15)
    driver.get("https://www.instagram.com/instagram/")
    time.sleep(2)
    driver.find_element(By.XPATH, "//button[text()='Accept All']").click()
    time.sleep(2)
    #driver.find_element(By.XPATH, "//button[text()='Log in']").click()
    driver.find_element(By.NAME, "username").send_keys('username')
    driver.find_element(By.NAME, "password").send_keys('password')
    driver.find_element(By.XPATH, "//div[text()='Log In']").click()
    driver.find_element(By.XPATH, "//button[text()='Not now']").click()
    driver.find_element(By.XPATH, "//button[text()='Not Now']").click()
    # It opens the Instagram page, clicks the first post, and then clicks the next-post button for the specified range
    driver.get("https://www.instagram.com/instagram/")
    driver.find_element(By.XPATH, "//div[@class='v1Nh3 kIKUG _bz0w']").click()
    for page in range(1, 10):
        driver.find_element(By.XPATH, "//button[@class='wpO6b ']//*[@aria-label='Next']").click()
        time.sleep(2)
    driver.quit()
As you can see, the locator for the next-post right-arrow button changes between the first post and all the following posts.
For the first post you should use this locator:
//div[contains(@class,'coreSpriteRight')]
While for all the other posts you should use this locator:
//a[contains(@class,'coreSpriteRight')]
The second element, //a[contains(@class,'coreSpriteRight')], is also present on the first post page; however, it is not clickable there. It is enabled and can be clicked on non-first pages only.
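The two-locator fallback described above could be wrapped in a small helper (a sketch; the locators are the ones from this answer, `find_next_arrow` is a made-up name, and the driver only needs a `find_elements` method, which returns an empty list when nothing matches):

```python
FIRST_POST_NEXT = "//div[contains(@class,'coreSpriteRight')]"
OTHER_POST_NEXT = "//a[contains(@class,'coreSpriteRight')]"

def find_next_arrow(driver):
    """Return the first enabled next-post arrow, trying the first-post
    locator before the one used on every later post; None if neither matches."""
    for xpath in (FIRST_POST_NEXT, OTHER_POST_NEXT):
        hits = driver.find_elements("xpath", xpath)  # [] when nothing matches
        if hits and hits[0].is_enabled():
            return hits[0]
    return None
```

Because the first-post locator is tried first, the helper works on both the first post and the later ones without you having to track which page you are on.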
As you can see in the picture below, the wpO6b button is nested inside a lot of divs. In that case you might need to give Selenium that same path of divs to be able to access the button, or give it an XPath.
It's not the most optimized approach, but it should work fine.
driver.find_element(By.XPATH, "(.//*[normalize-space(text()) and normalize-space(.)='© 2022 Instagram from Meta'])[1]/following::*[name()='svg'][2]").click()
Note that the XPath leads to an svg, so we are actually clicking on the svg element itself, not the button.

scraping not working - InvalidSelectorException: Message: invalid selector: An invalid or illegal selector was specified

I am trying to scrape the Yahoo Finance webpage (comment section). I want to click on a button whose class is shown in the picture below:
I want to select the button with the following code, but I am getting an InvalidSelectorException and I do not understand why.
Note that in my code I have replaced the spaces with . because that's what I usually do, but I have also tried without replacing the spaces, and it is not working in either case.
link = 'https://finance.yahoo.com/quote/AMD/community?p=AMD'
path = r"""chromedriver.exe"""
driver = webdriver.Chrome(executable_path=path)
driver.get(link)
driver.find_element_by_class_name('sort-filter-button.O(n):h.O(n):a.Fw(b).M(0).P(0).Ff(i).C(#000).Fz(16px)')
You can check the below:
# This page takes more time to load
sleep(15)
element = driver.find_element_by_xpath("//button[@aria-label='Sort Reactions']")
element.click()
update
element = driver.find_element_by_xpath("//button[contains(@class,'sort-filter-button')]")
element.click()
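Instead of the hardcoded sleep(15) above, an explicit wait only blocks as long as the element actually takes to appear. With Selenium that is WebDriverWait(driver, 15).until(EC.element_to_be_clickable(...)); the dependency-free sketch below shows the same polling idea (the helper name `wait_for` is mine, not Selenium's):

```python
import time

def wait_for(condition, timeout=15.0, poll=0.5,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns a truthy value or `timeout`
    elapses; the same idea WebDriverWait implements for Selenium."""
    deadline = clock() + timeout
    while True:
        result = condition()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        sleep(poll)
```

For example, `wait_for(lambda: driver.find_elements_by_xpath("//button[@aria-label='Sort Reactions']"))` returns the element list as soon as it is non-empty.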

Elem could not be scrolled into view

I'm new to Python and Selenium.
For fun, I'm scraping a page. I have to click a first button for the comments, and then another button for "All comments" so I can get them all.
The first click works, but not the second.
I've set a hardcoded scroll, but it's still not working.
This is the python code I'm working on:
boton = driver.find_element_by_id('tabComments_btn')
boton.click()
wait = WebDriverWait(driver, 100)
From here on it doesn't work (it scrolls, but says the element can't be scrolled into view):
driver.execute_script("window.scrollTo(0, 1300)")
botonTodos= driver.find_element_by_class_name('thread-node-children-load-all-btn')
wait = WebDriverWait(driver, 100)
botonTodos.click()
If I only click the first button, I'm able to scrape the first 10 comments, so this part works:
wait.until(EC.presence_of_element_located((By.CLASS_NAME, 'thread-node-message')))
for elm in driver.find_elements_by_css_selector(".thread-node-message"):
    print(elm.text)
This is the part of the HTML I'm stuck on:
Load next 10 comments
Load all comments
Publicar un comentario
There's a whitespace node with the tag #text between each button.
Any ideas welcome.
Thanks.
Here are the different options.
#first create the elements ref
load_next_btn = driver.find_element_by_css_selector(".thread-node-children-load-next-btn")
load_all_btn = driver.find_element_by_css_selector(".thread-node-children-load-all-btn")
# Scroll to the button you are interested in (I am scrolling to load_all_btn)
# Option 1
load_all_btn.location_once_scrolled_into_view
# Option 2
driver.execute_script("arguments[0].scrollIntoView();", load_all_btn)
# Option 3
btnLocation = load_all_btn.location
driver.execute_script("window.scrollTo(" + str(btnLocation['x']) + "," + str(btnLocation['y']) + ");")
Test Code:
Check if this code is working.
url = "https://stackoverflow.com/questions/55228646/python-selenium-cant-sometimes-scroll-element-into-view/55228932? noredirect=1#comment97192621_55228932"
driver.get(url)
element = driver.find_element_by_xpath("//a[.='Contact Us']")
element.location_once_scrolled_into_view
time.sleep(1)
driver.find_element_by_xpath("//p[.='active']").location_once_scrolled_into_view
driver.execute_script("arguments[0].scrollIntoView();",element)
time.sleep(1)
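The scroll-then-click pattern from the options above can be wrapped in one small helper so both steps always happen together (a sketch; `scroll_and_click` is my name for it, and the driver only needs `find_element` and `execute_script`):

```python
def scroll_and_click(driver, css):
    """Find the element matching the CSS selector `css`, scroll it into
    view via JavaScript (Option 2 above), then click it."""
    btn = driver.find_element("css selector", css)
    driver.execute_script("arguments[0].scrollIntoView();", btn)
    btn.click()
    return btn
```

For the question above that would be `scroll_and_click(driver, ".thread-node-children-load-all-btn")`.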

Selenium web scraping returns an error when clicking on multiple buttons on one page

Really Need help from this community!
When I attempt to scrape the dynamic content from a travel website, the prices and related vendor info can be obtained only after I click on the "View Prices" button on the website. So I am considering using a for loop to click on all the "View Prices" buttons before I do my scraping with Selenium.
The question is that every single button can be clicked through browser.find_element_by_xpath().click(), but when I create a list which includes all the button info, an error pops up:
Code Block :
browser=webdriver.Chrome("C:/Users/Owner/Downloads/chromedriver_win32/chromedriver.exe")
url="https://www.cruisecritic.com/cruiseto/cruiseitineraries.cfm?port=122"
browser.get(url)
#print(browser.find_element_by_css_selector(".has-price.meta-link.show-column").text)
ButtonList = ["//div[@id='find-a-cruise-full-results-container']/div/article/ul/li[3]/span[2]",
              "//div[@id='find-a-cruise-full-results-container']/div/article[2]/ul/li[3]/span[2]",
              "//div[@id='find-a-cruise-full-results-container']/div/article[3]/ul/li[3]/span[2]"]
for button in ButtonList:
    browser.implicitly_wait(20)
    browser.find_element_by_xpath(button).click()
Error Stack Trace :
WebDriverException: unknown error: Element <span class="label hidden-xs-down" data-title="...">View All Prices</span> is not clickable at point (862, 12). Other element would receive the click: ...
(Session info: chrome=63.0.3239.132)
(Driver info: chromedriver=2.35.528161 (5b82f2d2aae0ca24b877009200ced9065a772e73),platform=Windows NT 10.0.16299 x86_64)
My question is: how can I click on all the buttons on the web page before scraping, or is there any other way to scrape dynamic content that only appears after clicking certain buttons? The attached picture is the webpage screenshot.
Really appreciate the help from the community!
You might need to use a relative path for the XPath you are using.
It might be the case that the data shown is only partially loaded while you are performing the scrape.
Methods to try:
Increase the wait time
Change the XPath / use a relative XPath
Splinter: you can use it to make the browser calls the regular way
You need to check whether the data is present in the DOM while making the calls. If that's the case, waiting until the complete page has loaded will help you out.
Hello, use the following code to click on each price button. If you want, you can also introduce an implicit wait.
for one_row_view_price in browser.find_elements_by_xpath('//span[@data-title="View All Prices"]'):
    one_row_view_price.click()
Let me know if your BOT is able to click on the price buttons.
Thanks
Happy Coding
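One caveat with that loop: as the original error shows, a normal `.click()` can be intercepted by an overlapping element. A sketch that keeps the same loop but falls back to a JavaScript click when the normal click raises (the helper name `click_all` is made up; the XPath is the one from the answer above):

```python
def click_all(driver, xpath):
    """Click every element matching `xpath`; when a normal click fails
    (e.g. it is intercepted by an overlay), fall back to a JS click."""
    clicked = 0
    for elem in driver.find_elements("xpath", xpath):
        try:
            elem.click()
        except Exception:  # e.g. ElementClickInterceptedException
            driver.execute_script("arguments[0].click();", elem)
        clicked += 1
    return clicked

# e.g. click_all(browser, '//span[@data-title="View All Prices"]')
```

The JavaScript click bypasses Selenium's "element would receive the click" check, which is exactly what causes the exception here.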
Here is a function designed on the basis of your requirement:
import re

def click_handler(xpath):
    # Escape double quotes so the XPath can be embedded in the JS string
    xpath = re.sub('"', "'", xpath)
    # Find the total number of matching elements on the webpage
    total_element = browser.execute_script("""
        var elements = document.evaluate("%s",
            document,
            null,
            XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE,
            null);
        return elements.snapshotLength;
        """ % xpath
    )
    # Check if there is any element
    if total_element:
        # Iterate over all found elements and click each one
        for element_pos in range(total_element):
            browser.execute_script("""
                var elements = document.evaluate("%s",
                    document,
                    null,
                    XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE,
                    null);
                var im = elements.snapshotItem(%d);
                im.click();
                """ % (xpath, element_pos)
            )
            print("*** " + str(element_pos + 1) + " elements clicked")
        print("\n****************************")
        print("Total " + str(total_element) + " elements clicked")
        print("****************************\n")
    # Inform the user that no element was found on the webpage
    else:
        print("\n****************************")
        print("Element not found on webpage")
        print("****************************\n")

click_handler('//span[@data-title="View All Prices"]')

Python & Selenium - unknown error: Element is not clickable at point (663, 469). Other element would receive the click:

I have this Selenium code that should click on a size-selection button:
submit_button = driver.find_element_by_class_name('pro_sku')
elementList = submit_button.find_elements_by_tag_name("a")
elementList[3].click()
It works for other pages, but now on one page I get this error:
selenium.common.exceptions.WebDriverException: Message: unknown error: Element is not clickable at point (663, 469). Other element would receive the click:
I don't understand it, because I can look at the browser window that Selenium opens, and normally I can click on these buttons.
How can I solve this?
Someone asked for the website. Here it is: http://de.sinobiological.com/GM-CSF-CSF2-Protein-g-19491.html
You can use an XPath for element selection and then use the following method:
import re

# Click on an element via JavaScript document.evaluate
def element_click(xpath):
    xpath = re.sub('"', "'", xpath)
    browser.execute_script("""
        var elements = document.evaluate("%s",
            document,
            null,
            XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE,
            null);
        var im = elements.snapshotItem(0);
        im.click();
        """ % xpath
    )
So if your XPath is correct and the item is present in the DOM, it will definitely get clicked.
Happy Coding
You can use ActionChains to simulate mouse movement:
actions = ActionChains(driver)
actions.move_to_element(elementList[3]).perform()
elementList[3].click()
Edit
The <a> tags are not the actual sizes. Try
sizes = driver.find_elements_by_class_name('size_defaut')
sizes[3].click()
Try the below:
driver.execute_script("arguments[0].click();", elementList[3])
Hope it will help you :)