Why does trying to click with Selenium bring up "ElementNotInteractableException"? - python

I'm trying to click through to "https://2018.navalny.com/hq/arkhangelsk/" from the website's main page. However, I get this error:
selenium.common.exceptions.ElementNotInteractableException: Message:
There's nothing after "Message:"
My code
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
browser = webdriver.Firefox()
browser.get('https://2018.navalny.com/')
time.sleep(5)
linkElem = browser.find_element_by_xpath("//a[contains(@href,'arkhangelsk')]")
type(linkElem)
linkElem.click()
I think xpath is necessary for me because, ultimately, my goal is to click not a single link but 80 links on this webpage. I've already managed to print all the relevant links using this:
driver.find_elements_by_xpath("//a[contains(@href,'hq')]")
However, for starters, I'm trying to make it click at least a single link.
Thanks for your help,

The best way to figure out issues like this is to look at the page source using the developer tools of your preferred browser. For instance, when I go to this page, open the HTML tab of Firebug, and search for //a[contains(@href,'arkhangelsk')], I see this:
So the link is located within a div which is currently not visible (in fact, the entire sub-section starting from the div with id="hqList" is hidden). Selenium will not let you click on invisible elements, although it will let you inspect them. Hence getting the element works, but clicking on it does not.
What you do with it depends on what your expectations are. In this particular case it looks like you need to click on <label class="branches-map__toggle-label" for="branchesToggle">Список</label> to get that link visible. So add this:
browser.find_element_by_link_text("Список").click()
after that you can click on any links in the list.
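Putting the two pieces together, a rough sketch of the asker's end goal might look like this (collect_hrefs and visit_branches are hypothetical helper names; the hrefs are collected up front because the element references go stale once the browser navigates away):

```python
def collect_hrefs(elements):
    """Return the href attribute of each element as a plain list of strings."""
    return [el.get_attribute("href") for el in elements]

def visit_branches():
    import time
    from selenium import webdriver

    browser = webdriver.Firefox()
    browser.get('https://2018.navalny.com/')
    time.sleep(5)

    # Reveal the hidden branch list first, as described above.
    browser.find_element_by_link_text("Список").click()

    # Collect all ~80 branch URLs before navigating anywhere.
    links = browser.find_elements_by_xpath("//a[contains(@href,'hq')]")
    for url in collect_hrefs(links):
        browser.get(url)  # visit each branch page in turn
```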

Related

How can Selenium (Python, Chrome) find web elements visible in dev tools, but not visible in page source?

I need to click the first item in the menu of a webpage with Python 3 using Selenium.
I manage to log in and navigate to the required page using Selenium, but there I get stuck: it looks like Selenium can't find any element on the page beyond the very first div in body.
I tried to find the element by ID, class, xpath, selector... The problem is probably not about that. I thought it could be about an iframe, but the content I need does not seem to be in one.
I guess that the problem is that the element I need to find is visible in the devtools, but not in the page source, so Selenium just can't see it - does this make sense? If so, can this be fixed?
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get("my site")
# log-in website and navigate to needed page
# [...]
# find element in page
# this works
first_div = driver.find_element(By.CSS_SELECTOR, "#app-wrapper")
# this does not work
second_div = driver.find_element(By.CSS_SELECTOR, "#app-wrapper > div.layout.flex.flex-col.overflow-x-hidden.h-display-flex.h-flex-direction-column.h-screen")
Edit
The problem is most likely due to a dynamic webpage with parts of the DOM tree attached later on by a script. I downloaded a local version of page.html, removed scripts, and successfully found the sought-after element in the local page with
from selenium import webdriver
from selenium.webdriver.common.by import By
from pathlib import Path
driver = webdriver.Chrome()
html_file = Path.cwd() / "page.html"
driver.get(html_file.as_uri())
my_element = driver.find_element(By.CSS_SELECTOR, "[title='my-title']")
The exact same driver.find_element query won't work on the online page. I'm trying to implement a waiting condition as suggested in Misc08's answer.
I guess that the problem is that the element I need to find is visible
in the devtools, but not in the page source, so Selenium just can't
see it - does this make sense? If so, can this be fixed?
No, this does not make sense: Selenium runs a full browser in the background, just like the one you use when you investigate the page source with the devtools.
But you have some options to narrow down your problem. The first thing you can do is print the source the webdriver is "seeing" at this moment:
print(driver.page_source)
If you see the elements you are looking for in the page source, then you should try to improve your selector. It is helpful to go down the DOM step by step: look for an upper element in the page tree first; if this works, try to find the next child element, then the next child, and so on. You can check whether Selenium found the element like this:
from selenium.common.exceptions import NoSuchElementException

try:
    myelement = driver.find_element(By.CSS_SELECTOR, 'p.content')
    print("Found :)")
except NoSuchElementException:
    print("Not found :(")
By the way, I think your CSS selector is far too complex; just use one CSS class, not all of them:
second_div = driver.find_element(By.CSS_SELECTOR, "#app-wrapper > div.layout")
But there might be the case where the elements you are looking for are not present in the page source from the beginning. Dynamic webpages are getting more and more popular: parts of the DOM tree are attached later on by a script, so you have to wait for the scripts to execute before you can find these "dynamic" elements. One dirty and unreliable option is to just add a sleep() here. Much better is to use an explicit waiting condition, see https://selenium-python.readthedocs.io/waits.html
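For reference, an explicit wait wrapped in a small helper might look roughly like this (a sketch; find_when_ready is a name of my choosing, and the selector in the usage note comes from the question):

```python
def find_when_ready(driver, css_selector, timeout=10):
    """Block until the element is attached to the DOM, then return it.
    Raises TimeoutException if the page scripts never add it."""
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    wait = WebDriverWait(driver, timeout)
    return wait.until(
        EC.presence_of_element_located((By.CSS_SELECTOR, css_selector))
    )

# usage, with the selector from the question:
# second_div = find_when_ready(driver, "#app-wrapper > div.layout")
```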

With selenium on python don't know how to shut down a banner which is preventing selenium from accessing the page content

I'm trying to open a site with Selenium (with Python) using the Chrome browser, but when I do, a full-screen promo banner immediately pops up and I can't access the site content unless I close it.
On the top right there is an "x" as if it were a quit button, but it is actually a ::before pseudo-element,
and from its description it seems to me that it doesn't contain any button element.
If I operate manually, the banner closes both when I click on the x and when I click on the part of the page outside the banner, but I really don't understand how to access it with Selenium.
The webpage I'm trying to open is https://sports.bwin.it/it/sports
Needless to say, I'm quite inexperienced, so I hope this question won't sound too basic, but I wasn't able to find a solution in the Selenium docs or on the web; if someone could give me any hint I would appreciate it.
This is a screenshot from the page I'm talking about
This is part of the HTML code from the web page; the element I am talking about is the one pointed to by the arrow.
Based on your screenshot, the xpath you want to use would be something like this:
//*[@data-id='dj_sports_c_ovl_br']//span
Full code would be something like this:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

element = WebDriverWait(driver, 20).until(
    EC.element_to_be_clickable((By.XPATH, "//*[@data-id='dj_sports_c_ovl_br']//span"))
)
element.click()

Unable to locate/click pop-up button with Selenium in Python

I'm using Selenium in Python 3 to access webpages, and I want to click on a pop-up button, but I am unable to locate it with Selenium.
What I'm describing below applies to a number of sites with a pop-up, so I'll use a simple example.
url = "https://www.google.co.uk"
from selenium import webdriver
driver = webdriver.Firefox()
driver.implicitly_wait(10)
driver.get(url)
The page has a pop-up for agreeing to cookies.
I want the script to click on the "I agree" button, but I'm unable to locate it.
I've found a few questions and posts about this online (including on Stackoverflow), but all the suggestions I found seem to fall in one of the following categories and don't seem to work for me.
Wait longer for the pop-up to actually load.
I've tried adding delays, and in fact, I'm testing this interactively, so I can wait all I want for the page to load before I try to locate the button, but it doesn't make any difference.
Use something like driver.switch_to.alert
I get a NoAlertPresentException. The pop-up doesn't seem to be an alert.
Locate the element using driver.find_element.
This doesn't work either, regardless of which approach I use (xpath, class name, text etc.). I can find elements from the page under the pop-up, but nothing from the pop-up itself. For example,
# Elements in main page (under pop-up)
driver.find_element_by_partial_link_text("Sign in") # returns FirefoxWebElement
driver.find_element_by_class_name("gb_g") # returns FirefoxWebElement
# Elements on the pop-up
driver.find_element_by_partial_link_text("I agree") # NoSuchElementException
driver.find_element_by_class_name("RveJvd snByac") # NoSuchElementException
The popup just doesn't seem to be there in the page source. In fact, if I try looking at the loaded page source from the browser, I can't find anything related to the pop-up. I understand that many sites use client-side scripts to load elements dynamically, so many elements wouldn't show up in the raw source, but that was the point of using Selenium: to load the page, interpret the scripts and access the end result.
So, what am I doing wrong? Where is the pop-up coming from, and how can I access it?
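One likely explanation (a guess, since this consent dialog's markup is not shown here): dialogs like this often live inside an <iframe>, and Selenium only searches the current document, so nothing inside the frame is findable until you switch into it. A sketch of a frame-probing helper (click_in_any_frame is a hypothetical name; it uses the plural find_elements calls, which return an empty list instead of raising):

```python
def click_in_any_frame(driver, xpath):
    """Try to click the element in the top document first, then inside
    each iframe. find_elements (plural) returns [] instead of raising,
    so no exception handling is needed."""
    driver.switch_to.default_content()
    hits = driver.find_elements_by_xpath(xpath)
    if hits:
        hits[0].click()
        return True
    for frame in driver.find_elements_by_tag_name("iframe"):
        driver.switch_to.default_content()
        driver.switch_to.frame(frame)
        hits = driver.find_elements_by_xpath(xpath)
        if hits:
            hits[0].click()
            return True
    driver.switch_to.default_content()
    return False

# usage (the button text is from the question):
# click_in_any_frame(driver, "//*[normalize-space()='I agree']")
```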

Python Edge Driver Web Automation Help - Cannot find Xpath

I am new to Python and I am learning how to automate webpages. I understand the basics of using the different locators under the inspect element tab to drive my code.
I have written some basic code to skip YouTube ads, however I am stuck on finding the correct page element to agree to the privacy policy pop-up box in YouTube. I have used ChroPath to try to find the xpath of the element, however there doesn't appear to be one. I was unable to locate any other page elements, and I was wondering if anyone has any ideas on how I can automate the click of the 'I Agree' button?
Python Code:
from msedge.selenium_tools import Edge, EdgeOptions

options = EdgeOptions()
options.use_chromium = True
driver = Edge(options=options)
driver.get('http://www.youtube.com')

def agree():
    while True:
        try:
            driver.find_element_by_xpath('/html/body/ytd-app/ytd-popup-container/paper-dialog/yt-upsell-dialog-renderer/div/div[3]/div[1]/yt-button-renderer/a/paper-button').click()
            driver.find_elements_by_xpath('.<span class="RveJvd snByac">I agree</span>').click()
        except:
            continue

if __name__ == '__main__':
    agree()
A YouTube Inspect Element screenshot is below:
I don't know if the xpath in your code is right, as I can't see the whole HTML structure of the page. But you can use the F12 dev tools in Edge to find the xpath and to check if the xpath you find is right:
Open the page you want to automate and open F12 dev tools in Edge.
Use Ctrl+Shift+C and click the element you want to locate and find the html code of the element.
Right click the html code and select Copy -> Copy XPath.
Then you can try to use the xpath you copy.
Besides, find_elements_by_xpath(xpath) returns a list of all matching elements, so you need to specify which element of the list to click by indexing into it with [x], for example:
driver.find_elements_by_xpath('.<span class="RveJvd snByac">I agree</span>')[0].click()
When inspecting the page elements I had overlooked an iframe element. After doing some digging I came across the fact that I had to tell the Selenium driver to switch from the main page to the iframe. I added the following code and now the click on the 'I Agree' button is automated:
frame_element = driver.find_element_by_id('iframe')
driver.switch_to.frame(frame_element)
agree2 = driver.find_element_by_xpath("/html/body/div/c-wiz/div[2]/div/div/div/div/div[2]/form/div/span/span").click()
driver.switch_to.default_content()

How do I loop through these web pages with selenium?

I am new to programming but am getting familiar with web-scraping.
I wish to write a code which clicks on each link on the page.
In my attempted code, I have made a sample of just two links to click on, to speed things up. However, my current code only clicks the first link, not the second.
from selenium import webdriver
import csv

driver = webdriver.Firefox()
driver.get("https://www.betexplorer.com/baseball/usa/mlb-2018/results/?stage=KvfZSOKj&month=all")

matches = driver.find_elements_by_xpath('//td[@class="h-text-left"]')
m_samp = matches[0:1]
for i in m_samp:
    i.click()
    driver.get("https://www.betexplorer.com/baseball/usa/mlb-2018/results/?stage=KvfZSOKj&month=all")
Ideally, I would like it to click the first link, then go back to the previous page, then click the second link, then go back to the previous page.
Any help is appreciated.
First collect all the clickable URLs into one list,
then iterate over the list,
like list_urls = ["url1", "url2"]
for i in list_urls:
    driver.get(i)
Save all the URLs first; otherwise going back and clicking will not work, because you have only one driver instance, and the element references from the results page go stale once you navigate away.
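A minimal sketch of that suggestion against the same page (urls_from_cells and visit_all_matches are names of my choosing; the sketch assumes each result cell contains an <a> tag):

```python
def urls_from_cells(cells):
    """Pull the first link's href out of each result cell; skip cells without one."""
    urls = []
    for cell in cells:
        links = cell.find_elements_by_tag_name("a")
        if links:
            urls.append(links[0].get_attribute("href"))
    return urls

def visit_all_matches():
    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get("https://www.betexplorer.com/baseball/usa/mlb-2018/results/?stage=KvfZSOKj&month=all")

    # Collect the URLs before navigating; the cell elements go stale afterwards.
    cells = driver.find_elements_by_xpath('//td[@class="h-text-left"]')
    for url in urls_from_cells(cells):
        driver.get(url)  # load each match page directly; no need to go back
```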
