Hidden HTML Elements in Selenium's WebDriver (Python)

I'm writing test scripts for a web page in python using Selenium's remote control interface.
I'm writing it like this:
elem = browser.find_element_by_link_text("foo")
elem.click()
elem = browser.find_element_by_name("goo")
elem.send_keys("asdf")
elem = browser.find_element_by_link_text("foo2")
elem.click()
It then needs to select an item in a list. The list becomes visible when the mouse hovers over it, but Selenium cannot find the element while it's hidden. The list also shows options based on who is logged in. The hover behaviour is implemented in CSS, so trying to drive it from JavaScript and using gettext() does not work.
I've tried searching for the link by name, class, and XPath, but it always reports that the element is not visible. I've verified from browser.page_source that the link is in the source code, so it's reading the correct page.
How do I select the link inside the list? Any help is appreciated.

Selenium and :hover css suggests that this can't be done through the Selenium RC interface; it must instead be done using the WebDriver API.

Try move_to_element(). Check out the API http://readthedocs.org/docs/selenium-python/en/latest/api.html
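For instance, a minimal sketch using ActionChains to hover over the container that reveals the list (the '.menu' selector and the link text are hypothetical placeholders):
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains

browser = webdriver.Firefox()
browser.get('http://example.com')

# Hover over the element that reveals the hidden list ('.menu' is hypothetical)
menu = browser.find_element_by_css_selector('.menu')
ActionChains(browser).move_to_element(menu).perform()

# Once the list is visible, its items can be located and clicked as usual
browser.find_element_by_link_text('foo2').click()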

Related

How can I click on 'Show more matches' on the Flashscore website using Selenium library in Python to scrape hidden information?

I am working on scraping data from the Flashscore website.
https://www.flashscore.com/football/albania/superliga-2019-2020/results/
Although I can find the links for most of the matches that are visible once the above page loads, there are many matches that are hidden and can only be accessed by clicking on 'Show more matches'.
I found the class for 'Show more matches' (event__more event__more--static) and used Selenium's .click() method in Python, but the output is null. I also tried various other ways of clicking this link but couldn't get it working.
Is there any other way I can click on the link and extract the information in Python? Any help would be greatly appreciated.
Note: I also haven't found any classes where all of this information is hidden.
You can use the execute_script() driver method to achieve this. It's used for executing JavaScript in the current window/frame.
You can find the code snippet below:
from selenium import webdriver

driver = webdriver.Chrome()  # any WebDriver works here; Chrome is an assumption
driver.get('https://www.flashscore.com/football/albania/superliga-2019-2020/results/')
show_more_button = driver.find_element_by_xpath('//*[@id="live-table"]/div[1]/div/div/a')  # find the 'Show more matches' element
driver.execute_script("arguments[0].click();", show_more_button)  # click it via JavaScript
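Note that a click dispatched through execute_script() runs as plain JavaScript in the page, so it bypasses WebDriver's visibility and interactability checks; that is why it can succeed where a normal .click() fails on an element Selenium considers hidden.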

Python Edge Driver Web Automation Help - Cannot find Xpath

I am new to Python and I am learning how to automate webpages. I understand the basics of using the different locators under the inspect-element tab to drive my code.
I have written some basic code to skip YouTube ads, but I am stuck on finding the correct page element to agree to the privacy policy pop-up box on YouTube. I have used ChroPath to try to find the XPath of the element, but there doesn't appear to be one. I was unable to locate any other page elements, and I was wondering if anyone has any ideas on how I can automate the click of the 'I Agree' button?
Python Code:
from msedge.selenium_tools import Edge, EdgeOptions

options = EdgeOptions()
options.use_chromium = True
driver = Edge(options=options)
driver.get('http://www.youtube.com')

def agree():
    while True:
        try:
            driver.find_element_by_xpath('/html/body/ytd-app/ytd-popup-container/paper-dialog/yt-upsell-dialog-renderer/div/div[3]/div[1]/yt-button-renderer/a/paper-button').click()
            driver.find_elements_by_xpath('.<span class="RveJvd snByac">I agree</span>').click()
        except:
            continue

if __name__ == '__main__':
    agree()
I don't know if the xpath in your code is right as I can't see the whole html structure of the page. But you can use F12 dev tools in Edge to find the xpath and to check if the xpath you find is right:
Open the page you want to automate and open F12 dev tools in Edge.
Use Ctrl+Shift+C and click the element you want to locate and find the html code of the element.
Right click the html code and select Copy -> Copy XPath.
Then you can try to use the xpath you copy.
Besides, find_elements_by_xpath(xpath) returns a list of all matching elements. You need to specify which element of the list to click by passing an index like this: [x]. For example:
driver.find_elements_by_xpath('.<span class="RveJvd snByac">I agree</span>')[0].click()
When inspecting the page elements I had overlooked the iframe element. After doing some digging I found that I had to tell the Selenium driver to switch from the main page to the iframe. I added the following code and now the click on the 'I Agree' button is automated:
frame_element = driver.find_element_by_id('iframe')  # the consent dialog lives inside an iframe
driver.switch_to.frame(frame_element)  # switch the driver's context into the iframe
driver.find_element_by_xpath("/html/body/div/c-wiz/div[2]/div/div/div/div/div[2]/form/div/span/span").click()  # click 'I Agree'
driver.switch_to.default_content()  # switch back to the main page

How to use scrapy to click on element and return JS

I am trying to scrape names and contact details from this page: https://www.realestate.com.au/find-agent/agents/sydney-cbd-nsw. I want to click into each of the list items and get the information from the resulting page, but there is no href to follow.
I'm presuming that the class somehow points to some JS code: when a list item is clicked, the JS redirects you to the new URL. Can I get at it somehow using Scrapy?
Note: I don't know much about JS
This will give you all the links you need without JS rendering.
response.css('script::text').re('"url":"(.+?)"')
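For context, a minimal sketch of where that selector could sit inside a Scrapy spider; the spider name, callback, and profile-page selector are hypothetical:
import scrapy

class AgentsSpider(scrapy.Spider):
    name = 'agents'  # hypothetical spider name
    start_urls = ['https://www.realestate.com.au/find-agent/agents/sydney-cbd-nsw']

    def parse(self, response):
        # The profile URLs are embedded as JSON inside <script> tags
        for url in response.css('script::text').re('"url":"(.+?)"'):
            yield response.follow(url, callback=self.parse_agent)

    def parse_agent(self, response):
        # Hypothetical extraction; adjust the selector to the actual profile page
        yield {'name': response.css('h1::text').get()}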
Don't use Chrome for scraping unless there's no other way. It's really bad practice.
I'd recommend using Selenium, which will automate an instance of an actual browser. This means that sessions, cookies, JavaScript execution, etc. are all handled for you.
Example:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get("http://example.com")
button = driver.find_element_by_id('buttonID')
button.click()

Why does trying to click with selenium bring up "ElementNotInteractableException"?

I'm trying to click through to the page "https://2018.navalny.com/hq/arkhangelsk/" from the website's main page. However, I get this error:
selenium.common.exceptions.ElementNotInteractableException: Message:
There's nothing after "Message:"
My code
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
browser = webdriver.Firefox()
browser.get('https://2018.navalny.com/')
time.sleep(5)
linkElem = browser.find_element_by_xpath("//a[contains(@href,'arkhangelsk')]")
type(linkElem)
linkElem.click()
I think XPath is necessary for me because, ultimately, my goal is to click not on a single link but on 80 links on this webpage. I've already managed to print all the relevant links using this:
driver.find_elements_by_xpath("//a[contains(@href,'hq')]")
However, for starters, I'm trying to make it click at least a single link.
Thanks for your help.
The best way to figure out issues like this is to look at the page source using the developer tools of your preferred browser. For instance, when I go to this page, look at the HTML tab of Firebug, and search for //a[contains(@href,'arkhangelsk')], I can see that the link is located within a div which is currently not visible (in fact, the entire sub-section starting from the div with id="hqList" is hidden). Selenium will not allow you to click on invisible elements, although it will allow you to inspect them. Hence getting the element works, but clicking on it does not.
What you do with it depends on what your expectations are. In this particular case it looks like you need to click on <label class="branches-map__toggle-label" for="branchesToggle">Список</label> to get that link visible. So add this:
browser.find_element_by_link_text("Список").click()
after that you can click on any links in the list.
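Putting the pieces together, a minimal sketch of the whole flow (the fixed sleep is the asker's original crude wait; printing the hrefs stands in for clicking each link):
from selenium import webdriver
import time

browser = webdriver.Firefox()
browser.get('https://2018.navalny.com/')
time.sleep(5)  # crude wait for the page to load

# Click the toggle that switches the map to a list, making the links visible
browser.find_element_by_link_text("Список").click()

# The branch links can now be interacted with
for link in browser.find_elements_by_xpath("//a[contains(@href,'hq')]"):
    print(link.get_attribute('href'))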

How can Selenium (or BeautifulSoup) be used to access these hidden elements?

Here is an example page with pagination controlling dynamically loaded results.
http://www.rehabs.com/local/jacksonville-fl/
All that I presently know to try is:
curButton = 1
driver.find_element_by_css_selector('ul[class="pagination"]').find_elements_by_tag_name('li')[curButton].click()
Nothing seems to happen (likewise when trying to access and click the a tag, or when calling driver.get() on the href of the a element).
Is there another way to access the hidden elements? For instance, when reading the HTML of the entire page, the elements of the other pagination pages are shown, but they are apparently inaccessible with BeautifulSoup.
Pagination was added for humans; as you noted, the data for every page is already in the HTML. Maybe you used the wrong XPath or CSS selector. Check it.
Use this XPath:
//div[@id="listing-basic"]/article/div[@class="h3"]/a/@href
You can click on the pagination button using:
driver.find_elements_by_css_selector('.pagination li a')[1].click()
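To walk through every page, a rough sketch along the same lines (the index range and the fixed sleep are assumptions; adjust them to the actual pagination markup, and prefer an explicit wait):
import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('http://www.rehabs.com/local/jacksonville-fl/')

# Re-find the buttons on each iteration to avoid stale element references
num_buttons = len(driver.find_elements_by_css_selector('.pagination li a'))
for i in range(1, num_buttons):
    driver.find_elements_by_css_selector('.pagination li a')[i].click()
    time.sleep(2)  # crude wait for the dynamically loaded results
    # scrape the newly loaded results here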
