I'm trying a simple browser operation where I locate a username element on a website and then try to log in. I'm using Selenium and Python to do this. Here's some simple code that works on my local machine. The code opens a browser on my computer, navigates to the username box, and enters the username.
from selenium import webdriver
browser = webdriver.Firefox()
browser.get(my_url)
username_element = browser.find_elements_by_name("USERNAME")[0]
username_element.clear()
username_element.send_keys(my_username)
However, when I try to deploy the same code on an AWS server using pyvirtualdisplay so that Firefox doesn't need to pop up, it no longer works.
from pyvirtualdisplay import Display
from selenium import webdriver
display = Display(visible=0, size=(800, 600))
display.start()
browser = webdriver.Firefox()
browser.get(my_url)
username_element = browser.find_elements_by_name("USERNAME")[0]
username_element.clear()
username_element.send_keys(my_username)
The element is definitely found, but I get the element not visible error:
selenium.common.exceptions.ElementNotVisibleException: Message: Element is not currently visible and so may not be interacted with
which is confirmed by:
>>> username_element.is_displayed()
False
I've tried various things I found on SO including:
making sure xvfb and xephyr are installed
adding a browser.implicitly_wait(30)
trying a WebDriverWait(browser,30).until(EC.visibility_of_element_located((By.NAME, "USERNAME"))) which times out
Any ideas on how to solve this?
You can scroll the screen:
browser.execute_script("window.scrollTo(0, 600)")
Figured it out after taking a screenshot. Turns out my screen display wasn't set large enough. Changing the display size to (1600,900) solved the problem.
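For completeness, here is the original snippet with only the display size changed (my_url and my_username remain the same placeholders as in the question):
from pyvirtualdisplay import Display
from selenium import webdriver
# a larger virtual display keeps the username box from being pushed out of view
display = Display(visible=0, size=(1600, 900))
display.start()
browser = webdriver.Firefox()
browser.get(my_url)
username_element = browser.find_elements_by_name("USERNAME")[0]
username_element.clear()
username_element.send_keys(my_username)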
Related
I am using Chrome webdriver / Selenium in Python.
I tried several solutions (actions, maximize window, etc.) to get rid of this exception, without success.
The error is:
selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element ... is not clickable at point (410, 513). Other element would receive the click: ...
The code:
from selenium import webdriver
import time
url = 'https://www.tmdn.org/tmview/welcome#/tmview/detail/EM500000018203824'
driver = webdriver.Chrome(executable_path = "D:\Python\chromedriver.exe")
driver.get(url)
time.sleep(30)
driver.find_element_by_link_text('Show more').click()
I tested this code on my Linux PC with the latest libraries, Python 3, and chromedriver. It works perfectly (as far as I can tell), so try updating everything and trying again (and try not to leave Chrome while it runs). Here is the code:
from selenium import webdriver
import time
url = 'https://www.tmdn.org/tmview/welcome#/tmview/detail/EM500000018203824'
driver = webdriver.Chrome(executable_path = "chromedriver")
driver.get(url)
time.sleep(30)
driver.find_element_by_link_text('Show more').click()
P.S. chromedriver is in the same folder as the script.
Thank you for your assistance.
Actually, the issue was the cookie-notice panel in the footer of the web page ('We use cookies...'), which overlaps the 'Show more' link; when the driver tries to click, the click is intercepted by that panel.
The solution is to close that panel first, and then the code works fine.
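For reference, a minimal sketch of that fix; the selector for the cookie panel's close button below is only a placeholder and has to be looked up in the actual page:
from selenium import webdriver
import time
url = 'https://www.tmdn.org/tmview/welcome#/tmview/detail/EM500000018203824'
driver = webdriver.Chrome(executable_path='chromedriver')
driver.get(url)
time.sleep(30)
# dismiss the 'We use cookies...' footer first so it no longer intercepts the click
# (the CSS selector here is illustrative, not the site's real one)
driver.find_element_by_css_selector('button.cookie-accept').click()
driver.find_element_by_link_text('Show more').click()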
The code is working fine, but if you manually click on some other element after the page has loaded, before the sleep time is over, you can recreate the same error.
For example, after the site loaded I clicked on 'Search for trade mark with similar image', and Selenium was then no longer able to find the 'Show more' element. So maybe some other part of your code is clicking a different element and loading a different URL, which is why Selenium generates this error. Your code is fine; just check for such conflicts.
The purpose of this script is to scrape info from my work schedule. The full script works fine when I run it on my Windows laptop, but when I try to run it on Raspbian it appears the .click() on the display_but variable is not doing its job. The page pulls up fine and logs in with no problem, and it even selects an option from a dropdown with no problem; only clicking the display button seems to do nothing. The element is being found, as printing it gives a Selenium web element, and there are no error messages. When I use drop.click() this appears to work, as the correct option from the dropdown is selected. I am lost.
Below are the workarounds I have tried.
1. Using Keys module to tab to the button and then submitting.
- this results in the correct button being selected but when I "press enter" using keys nothing happens.
2. I tried waiting for the element to be clickable using the WebDriverWait, expected_conditions, and By modules
- this method also works on my Windows laptop but not on Raspbian
3. I have tried adding implicit waits and time.sleep
- these methods did not seem to help
Below is my code
import time
from selenium import webdriver
driver = webdriver.Chrome(executable_path="/Users/Sanch/Desktop/Drivers/chromedriver")
url = 'website'
driver.get(url)
#logs into account
username_xpath = '//*[@id="usernameInputField"]'
password_xpath = '//*[@id="passwordInputField"]'
login_xpath = '//*[@id="submitButton"]/span/input'
user_name = driver.find_element_by_xpath(username_xpath)
user_name.send_keys('username')
password = driver.find_element_by_xpath(password_xpath)
password.send_keys('password')
password.submit()
#selects option from dropdown
drop_xpath ='/html/body/associate/div/view-userschedule/div/div/div[2]/div/div[1]/select/option[2]'
drop = driver.find_element_by_xpath(drop_xpath)
drop.click()
time.sleep(3)
# clicks display button (shows whatever is selected in the dropdown)
display_but_xpath = '/html/body/associate/div/view-userschedule/div/div/div[2]/div/div[3]/button'
display_but = driver.find_element_by_xpath(display_but_xpath)
display_but.click()
You should probably do as much of that from the browser context as possible. For example:
driver.execute_script("document.querySelector('[id=usernameInputField]').value = 'user'")
driver.execute_script("document.querySelector('[id=passwordInputField]').value = 'password'")
driver.execute_script("document.querySelector('css-for-button').click()")
Solved the problem by running the script with headless Chrome instead of regular Chrome. Using the code below in place of driver = webdriver.Chrome(executable_path="/Users/Sanch/Desktop/Drivers/chromedriver") made the script run properly.
I am not sure if it was due to the lack of computing power on the Raspberry Pi 3+ or some other factor, but everything is working properly now. Maybe someone else can shed light on why headless works but regular Chrome wouldn't. Thanks for the help everyone!
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# headless driver setup and launch
chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--window-size=1920x1080")
chrome_driver = "your drivers path"
driver = webdriver.Chrome(chrome_options=chrome_options, executable_path=chrome_driver)
The aim of this is to open a browser window and save the site as PDF.
I'm writing Python code that:
1) Opens a web page
2) Does a control-p to bring up the print dialog box
NOTE: I will have pre-configured the browser to save as PDF instead of defaulting to printing to a printer
3) Does "return"
4) Enters the file name
5) Does "return" again
NOTE: In my full code, I'll be doing these steps hundreds of times
I'm having a problem early on with control-p. As a test, I'm able to send dummy text to Google's search, but I can't seem to send a control-p (no error messages). I'm using Google as an easy example, but my final code will use various other sites.
Obviously I'm missing something but just can't figure out what.
I tried an alternate method of using javascript instead of ActionChains:
driver.execute_script('window.print();')
This worked in getting the print dialog, but I wasn't able to feed anything else into that dialog box (like the file name and location for the PDF).
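If you end up using Chrome with a recent Selenium that exposes execute_cdp_cmd, one way to avoid the dialog entirely is to ask DevTools to render the PDF itself; a minimal sketch, with an illustrative output file name:
import base64
from selenium import webdriver
driver = webdriver.Chrome('chromedriver')
driver.get('http://www.google.com')
# Page.printToPDF returns the rendered page as base64-encoded PDF data
result = driver.execute_cdp_cmd('Page.printToPDF', {'printBackground': True})
with open('page.pdf', 'wb') as f:  # 'page.pdf' is just an example name
    f.write(base64.b64decode(result['data']))
driver.quit()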
I tried PDFkit to convert the web page into a PDF. It worked on some sites, but it crashed often (depending on what the site returned), the page was sometimes poorly formatted, and some sites (Pinterest) just didn't render at all. For this reason, I changed approach and decided to use Selenium and Chrome so that the PDF renders just like the page shows in the browser.
I thought about using "element.send_keys("some text")" instead of ActionChains, but since I'm going across multiple different web sites, I don't necessarily know what element to look for.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
import time
DRIVER = 'chromedriver'
driver = webdriver.Chrome(DRIVER)
URL = "http://www.google.com"
driver.get(URL)
actions = ActionChains(driver)
time.sleep(5) #Give the page time to load
actions.key_down(Keys.CONTROL)
actions.send_keys('p')
actions.key_up(Keys.CONTROL)
actions.perform()
time.sleep(5) #Sleep to see if the print dialog came up
driver.quit()
You can use AutoIt to achieve this.
First do pip install -U pyautoit
from selenium import webdriver
import autoit
import time
DRIVER = 'chromedriver'
driver = webdriver.Chrome(DRIVER)
driver.get('http://google.com')
driver.maximize_window()
time.sleep(10)
autoit.send("^p")
time.sleep(10) # Pause to allow you to inspect the browser.
driver.quit()
Please let me know if it's working.
try this:
webdriver.ActionChains(driver).key_down(Keys.CONTROL).send_keys('P').key_up(Keys.CONTROL).perform()
Check this out (this one uses Java's java.awt.Robot rather than Python):
robot.keyPress(KeyEvent.VK_CONTROL)
robot.keyPress(KeyEvent.VK_P)
// CTRL+P is now pressed
robot.keyRelease(KeyEvent.VK_P)
robot.keyRelease(KeyEvent.VK_CONTROL)
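The snippet above uses Java's Robot class; a rough Python equivalent, assuming the pyautogui package is installed, might look like this:
import time
import pyautogui
from selenium import webdriver
driver = webdriver.Chrome('chromedriver')
driver.get('http://www.google.com')
time.sleep(5)  # give the page time to load
# send Ctrl+P at the OS level so it reaches the focused browser window
pyautogui.hotkey('ctrl', 'p')
time.sleep(5)  # pause to see whether the print dialog appeared
driver.quit()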
I'm trying to use a Python/Selenium script to click the "Sign In" link in the top right corner of the Gmail main page. I used Firebug/FirePath to find the XPath for this class, and it seems to work fine in the browser tools but fails when the script tries to find the same element using that XPath. I would greatly appreciate it if you could point me in the right direction. Thank you!
Url: https://www.google.com/gmail/about/
PS: I'm relatively new to Selenium, so please excuse my ignorance if I'm approaching this issue in the wrong manner.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1920, 1080))
display.start()
browser = webdriver.Firefox()
browser.get('https://www.gmail.com')
print (browser.title)
g_login = browser.find_element_by_xpath("//a[@class='gmail-nav__nav-link gmail-nav__nav-link__sign-in']")
g_login.click()
You should use the same URL you provided above in your post:
browser.get('https://www.google.com/gmail/about/')
It looks like the other one redirects to a different URL, which makes the element lookup fail.
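Putting it together, a minimal sketch of the corrected flow, with an explicit wait added for good measure (the class names come from the question and may change whenever Google reworks the page):
from pyvirtualdisplay import Display
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
display = Display(visible=0, size=(1920, 1080))
display.start()
browser = webdriver.Firefox()
# use the /gmail/about/ page so the sign-in link is actually in the DOM
browser.get('https://www.google.com/gmail/about/')
g_login = WebDriverWait(browser, 10).until(
    EC.element_to_be_clickable(
        (By.XPATH, "//a[@class='gmail-nav__nav-link gmail-nav__nav-link__sign-in']")
    )
)
g_login.click()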
I expect to be able to save a page and then use lxml.html.parse() on it, but I was wondering if I could do it directly off an opened page?
I'm using Ubuntu if it makes any difference.
Edit: There's a method to use XPath directly (find_element_by_xpath), so I guess I don't need lxml. And to get the page, all you have to do is read the page_source property.
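For completeness, a minimal sketch of feeding the live page into lxml without saving anything to disk (assuming lxml is installed):
from lxml import html
from selenium import webdriver
browser = webdriver.Firefox()
browser.get('http://www.google.com')
# page_source holds the current DOM as a string, so it can be parsed in memory
tree = html.fromstring(browser.page_source)
print(tree.xpath('//a/@href')[:5])  # e.g. the first few link targets
browser.quit()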
To answer the 'use Selenium without spawning a visible window' question: yes, you can use PyVirtualDisplay on Ubuntu easily.
from pyvirtualdisplay import Display
from selenium import webdriver
display = Display(visible=0, size=(800, 600))
display.start()
# now Firefox will run in a virtual display.
# you will not see the browser.
browser = webdriver.Firefox()
browser.get('http://www.google.com')
print(browser.title)
browser.quit()
display.stop()
Code is from this blog post