Selenium click not working in Chrome [duplicate] - python

I am trying to click the anchor tag on Instagram's page that says "Log in". Here is my code:
from selenium import webdriver
from selenium.webdriver.common.by import By

browser = webdriver.Chrome()
browser.get('https://instagram.com')
login_elem = browser.find_element(By.XPATH, '//p/a[text() = "Log in"]')
login_elem.click()
The browser opens, but the element is not clicked. I have tried various other XPaths and none of them worked. (The original post included a screenshot of Instagram's page source.)

This is what I tried on my local system, and it worked:
from selenium.webdriver.common.by import By
from selenium import webdriver

driver = webdriver.Chrome(executable_path="D:\\cs\\chromedriver.exe")
driver.get("https://www.instagram.com/")
a = driver.find_element(By.XPATH, '//a[text() = "Log in"]')
# Scroll the element into view before clicking
driver.execute_script("return arguments[0].scrollIntoView();", a)
a.click()
I corrected the XPath and made a few other changes. Please note that the executable path should be replaced with your own.

You can use the XPath below:
.//*[@id='react-root']/section/main/article/div[2]/div[2]/p/a
or this CSS selector:
.izU2O>a
or the link text:
Log in
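Any of the three could be plugged in like this (a minimal sketch, assuming driver is your WebDriver instance; Instagram's generated class names such as izU2O change over time, so treat these locators as snapshots):
from selenium.webdriver.common.by import By
# Pick whichever locator proves stable; all three target the same anchor
login = driver.find_element(By.XPATH, ".//*[@id='react-root']/section/main/article/div[2]/div[2]/p/a")
login = driver.find_element(By.CSS_SELECTOR, ".izU2O>a")
login = driver.find_element(By.LINK_TEXT, "Log in")
login.click()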

Try this code:
element = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.PARTIAL_LINK_TEXT, "Log in")))
element.click()


unable to capture web element having _ngcontent-c6 in selenium with python [duplicate]

I want to capture a web element (highlighted in a screenshot in the original post).
I have already tried the following options (using absolute as well as relative paths):
submit = driver.find_element_by_xpath("html/body/vra-root/vra-shell/clr-main-container/vra-tabs/nav/ul/li[2]/a").click()
submit = driver.find_element_by_xpath("//ul[@class='nav']//li[@class='nav-item ng-star-inserted']//a[@id='csp.cs.ui.deployment'] and contains [text()='Deployments']").click()
submit = driver.find_element_by_xpath("//a[text()='Deployments']").click()
content = driver.find_element_by_css_selector('a.nav-link').click()
But every time I am getting the following error message: NoSuchElementException: Message: no such element: Unable to locate element.
I am new to this; any help is appreciated!
This element looks like it is inside an iframe. If so, you need to switch to the iframe first, like this:
WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, "iframe xpath")))
and then click:
driver.find_element_by_xpath("//a[text()='Deployments']").click()
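Once you are done inside the iframe, it is worth switching back so later lookups run against the main document (a one-line sketch):
# Return to the top-level document after working inside the iframe
driver.switch_to.default_content()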
If it's not an iframe issue, as @cruisepandy described, then try adding a wait around it to see if that helps:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 10)
submit = wait.until(EC.element_to_be_clickable((By.XPATH, "//a[text()='Deployments']")))
submit.click()

Python Selenium "Unable to locate element"

I am trying to use Selenium to sign up for an email account automatically whenever I need to. It's just a fun learning project for me. For the life of me, I don't understand why it can't find the element. This code works fine on the sign-in page but not on the sign-up page. I have tried all different Selenium commands and even tried using the ID and the class name. It either says it can't locate the element or that the element is not reachable by keyboard.
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
import time
options = Options()
driver = webdriver.Firefox(options=options, executable_path=r'geckodriver.exe')
driver.get("https://mail.protonmail.com/create/new?language=en")
time.sleep(10)
username_input = driver.find_element_by_id("username").send_keys("testusername")
Also, here is the HTML: https://i.imgur.com/ZaBMTzG.png
The username field is in an iframe; you need to switch to the iframe to make this work.
Below is code that works:
driver.get("https://mail.protonmail.com/create/new?language=en")
driver.switch_to.frame(driver.find_element_by_css_selector("iframe[title='Registration form'][class='top']"))
driver.find_element_by_id("username").send_keys("some string")
Read more about iframes, and about how to switch to an iframe/frame/frameset using the Python Selenium bindings, in the Selenium documentation.
Update:
from time import sleep
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 30)
driver.get("https://mail.protonmail.com/create/new?language=en")
# The username field lives in the top registration iframe
driver.switch_to.frame(driver.find_element_by_css_selector("iframe[title='Registration form'][class='top']"))
driver.find_element_by_id("username").send_keys("some string")
# Return to the main document before scrolling and switching frames again
driver.switch_to.default_content()
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
sleep(5)
# The submit button lives in the bottom registration iframe
driver.switch_to.frame(driver.find_element_by_css_selector("iframe[title='Registration form'][class='bottom']"))
wait.until(EC.element_to_be_clickable((By.NAME, "submitBtn"))).click()
I'm not sure if I've seen enough code to diagnose, but I think the way you are defining username_input is problematic. driver.find_element_by_id("username").send_keys("testusername") doesn't actually return anything, so you are effectively setting username_input to None.
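For example, splitting the lookup from the action keeps a usable reference (a minimal sketch):
# send_keys() returns None, so chaining it would leave username_input as None;
# keep the WebElement first, then act on it.
username_input = driver.find_element_by_id("username")
username_input.send_keys("testusername")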

Element not Clickable ( Chrome + Selenium + Python)

I am using ChromeDriver/Selenium in Python.
I tried several solutions (actions, maximizing the window, etc.) to get rid of this exception, without success.
The error is:
selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element ... is not clickable at point (410, 513). Other element would receive the click: ...
The code:
from selenium import webdriver
import time
url = 'https://www.tmdn.org/tmview/welcome#/tmview/detail/EM500000018203824'
driver = webdriver.Chrome(executable_path = "D:\Python\chromedriver.exe")
driver.get(url)
time.sleep(30)
driver.find_element_by_link_text('Show more').click()
I tested this code on my Linux PC with the latest libraries, Python 3, and ChromeDriver, and it works perfectly as far as I can tell. So update everything and try again (and try not to interact with Chrome while it runs). Here is the code:
from selenium import webdriver
import time
url = 'https://www.tmdn.org/tmview/welcome#/tmview/detail/EM500000018203824'
driver = webdriver.Chrome(executable_path = "chromedriver")
driver.get(url)
time.sleep(30)
driver.find_element_by_link_text('Show more').click()
P.S. chromedriver is in the same folder as the script.
Thank you for your assistance.
Actually, the issue was the cookie-consent panel in the footer of the web page ('We use cookies...'), which overlaps the 'Show more' link. When the driver tried to click, the click was intercepted by that panel.
The solution is to close that panel, after which the code worked fine.
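A minimal sketch of that fix, assuming the consent panel exposes a clickable accept/close button (the locator below is hypothetical; inspect the page for the real one):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
# Dismiss the cookie panel first so it no longer intercepts the click
# (hypothetical locator; replace with the real button on the page)
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[contains(., 'Accept')]"))).click()
driver.find_element_by_link_text('Show more').click()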
The code is working fine, but if you manually click on some other element after the page has loaded, before the sleep time is over, you can recreate the same error. For example, after the site loaded I clicked on 'Search for trade mark with similar image', and then Selenium was not able to find the 'Show more' element. So maybe some other part of your code is clicking on another element and loading a different URL, which is why Selenium generates this error. Your code is fine; just check for conflicting actions.

How to get all links on a web page using python and selenium IDE

I want to get all links from a web page using Selenium IDE and Python.
For example, if I search for "test" or anything else on Google, I want all the links related to that search.
Here is my code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
baseurl="https://www.google.co.in/?gws_rd=ssl"
driver = webdriver.Firefox()
driver.get(baseurl)
driver.find_element_by_id("lst-ib").click()
driver.find_element_by_id("lst-ib").clear()
driver.find_element_by_id("lst-ib").send_keys("test")
link_name = driver.find_element_by_xpath(".//*[@id='rso']/div[2]/li[2]/div/h3/a")
print(link_name)
driver.close()
Output
<selenium.webdriver.remote.webelement.WebElement object at 0x7f0ba50c2090>
Using the XPath $x(".//*[@id='rso']/div[2]/li[2]/div/h3/a") in Firebug's console.
Output
[a jtypes2.asp]
How can I get the link's content from this object?
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
baseurl="https://www.google.co.in/?gws_rd=ssl"
driver = webdriver.Firefox()
driver.get(baseurl)
driver.find_element_by_id("lst-ib").click()
driver.find_element_by_id("lst-ib").clear()
driver.find_element_by_id("lst-ib").send_keys("test")
driver.find_element_by_id("lst-ib").send_keys(Keys.RETURN)
driver.implicitly_wait(2)
link_name = driver.find_elements_by_xpath(".//*[@id='rso']/div/li/div/h3/a")
for link in link_name:
    print(link.get_attribute('href'))
Try the above code. Your code doesn't send a RETURN key after entering the search keyword. I've also added an implicit wait of 2 seconds to let the search results load, and I've changed the XPath to match all result links.
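As an aside, if you literally want every link on the page rather than just the search results, the same legacy API can collect all anchor tags (a minimal sketch):
# Collect every anchor on the page and print its href
links = driver.find_elements_by_tag_name('a')
for link in links:
    print(link.get_attribute('href'))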

Splinter or Selenium: Can we get current html page after clicking a button?

I'm trying to crawl the website "http://everydayhealth.com". However, I found that the page is rendered dynamically: when I click the button "More", new news items are shown, but clicking the button with Splinter doesn't make "browser.html" reflect the current HTML content. Is there a way to get the newest HTML source, using either Splinter or Selenium? My code in Splinter is as follows:
import requests
from bs4 import BeautifulSoup
from splinter import Browser
browser = Browser()
browser.visit('http://everydayhealth.com')
browser.click_link_by_text("More")
print(browser.html)
Based on @Louis's answer, I rewrote the program as follows:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
driver = webdriver.Firefox()
driver.get("http://www.everydayhealth.com")
more_xpath = '//a[#class="btn-more"]'
more_btn = WebDriverWait(driver, 10).until(lambda driver: driver.find_element_by_xpath(more_xpath))
more_btn.click()
more_news_xpath = '(//a[#href="http://www.everydayhealth.com/recipe-rehab/5-herbs-and-spices-to-intensify-flavor.aspx"])[2]'
WebDriverWait(driver, 5).until(lambda driver: driver.find_element_by_xpath(more_news_xpath))
print(driver.execute_script("return document.documentElement.outerHTML;"))
driver.quit()
However, in the output text, I still couldn't find the text from the updated page. For example, when I search for "Is Milk Your Friend or Foe?", it still returns nothing. What's the problem?
With Selenium, assuming that driver is your initialized WebDriver object, this will give you the HTML that corresponds to the state of the DOM at the time you make the call:
driver.execute_script("return document.documentElement.outerHTML;")
The return value is a string so you could do:
print(driver.execute_script("return document.documentElement.outerHTML;"))
When I use Selenium for tasks like this, I know browser.page_source does get updated.
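For example, after the click you could re-read the rendered DOM through page_source and hand it to BeautifulSoup, which the question already imports (a minimal sketch, assuming driver is the Selenium WebDriver from the snippet above):
# Re-read the rendered DOM after the click and parse it
from bs4 import BeautifulSoup
html = driver.page_source  # reflects the DOM after "More" was clicked
soup = BeautifulSoup(html, 'html.parser')
print(soup.title)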
