Selenium not following click that opens a new tab - python

For the longest time I had an issue with Selenium finding a button I was looking for on the page. After a week I got the "brilliant" idea to check the URL on which Selenium was searching for the button. Dummy me, it was the wrong URL.
So the issue is, selenium searches page1 for a specific item. It then clicks it, and by the websites design, it opens page2 in a new tab. How can I get selenium to follow the click and work on the new tab?
I thought of using Beautiful Soup to just copy the URL from page1, but the website doesn't expose the URLs. Instead it shows functions that fetch the URLs. It's really weird and confusing.
Ideas?
from selenium.common.exceptions import NoSuchElementException

all_matches = driver.find_elements_by_xpath("//*[text()[contains(., 'Pink')]]")
item = all_matches[0]
actions.move_to_element(item).perform()
item.click()
try:
    print(driver.current_url)
    get_button = driver.find_element_by_xpath('//*[@id="getItem"]')
except NoSuchElementException:
    print("Can't find button")
else:
    actions.move_to_element(get_button).perform()
    get_button.click()

Selenium treats tabs like windows, so generally, switching to the new window/tab is as easy as:
driver.switch_to.window(driver.window_handles[-1])
You may find it helpful to keep track of the windows with vars:
main_window = driver.current_window_handle
page2_window = driver.window_handles[-1]
driver.switch_to.window(page2_window)
Note that when you want to close the new window/tab, you have to both close & switch back:
driver.close()
driver.switch_to.window(main_window)
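Putting it together, here's a minimal sketch (assuming the driver and item objects from the question) that clicks, waits for the second tab to actually appear, switches to it, and later hands control back to page1:
from selenium.webdriver.support.ui import WebDriverWait

main_window = driver.current_window_handle
item.click()  # the click that opens page2 in a new tab
# Wait until the browser reports a second handle before switching
WebDriverWait(driver, 10).until(lambda d: len(d.window_handles) > 1)
page2_window = [h for h in driver.window_handles if h != main_window][0]
driver.switch_to.window(page2_window)
# ... find and click the button on page2 here ...
driver.close()                        # closes page2's tab
driver.switch_to.window(main_window)  # back to page1
Waiting on len(driver.window_handles) avoids a race where you try to switch before the new tab has opened.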

Related

bypass youtube account pop-up with selenium in python

I have tried to get past this pseudo pop-up in the following ways:
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, 'introAgreeButton'))).click()
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="introAgreeButton"]'))).click()
None of the ways described seems to work.
Is there any way to bypass this annoying pop-up?
I've come across the same problem. The solution is:
try:
    driver.switch_to.frame("iframe")
    agree_button = driver.find_element_by_id("introAgreeButton")
    agree_button.click()
except Exception:
    print("failed")
To move out of the current frame back to the page level:
driver.switch_to.default_content()
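For reference, a minimal sketch of the same idea with explicit waits (the iframe and button locators are taken from the answer above and may have changed since):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
# Switch into the consent iframe once it is available
wait.until(EC.frame_to_be_available_and_switch_to_it((By.TAG_NAME, "iframe")))
# Click the agree button once it is clickable
wait.until(EC.element_to_be_clickable((By.ID, "introAgreeButton"))).click()
# Return to the top-level document
driver.switch_to.default_content()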
This answer is updated for July 2022
if 'consent.youtube.com' in driver.current_url:  # THIS IS THE CHECK
    driver.find_element(By.XPATH, '//button[@class="VfPpkd-LgbsSe VfPpkd-LgbsSe-OWXEXe-k8QpJ VfPpkd-LgbsSe-OWXEXe-dgl2Hf nCP5yc AjY5Oe DuMIQc LQeN7 IIdkle"]').click()
This will skip the 'Before you continue to YouTube' screen.
I can't get that pop-up to occur on my attempts when I navigate to YouTube, but I suspect that this is a dialog or alert. If you check the Elements tab using the Chrome Dev Tools (F12), that should tell you. Then use find_element with an XPath to click on "I agree".
UPDATED - Modified XPATH to the iron-overlay-backdrop tag
driver.find_element(By.XPATH, "//iron-overlay-backdrop[@class='opened']//button[text()='I agree']").click()
After that, wait for the modal / dialog to disappear or be removed from the DOM and continue doing what you're doing
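A minimal sketch of that wait, assuming the backdrop locator from the XPath above:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10s for the consent backdrop to disappear before continuing
WebDriverWait(driver, 10).until(
    EC.invisibility_of_element_located(
        (By.XPATH, "//iron-overlay-backdrop[@class='opened']")
    )
)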

Python Selenium - Can't find correct element. Login Button

Thanks in advance for the help. I'm new to Python and tried for an hour to correct the mistake.
Trying to locate the login button element. Attached is an image of the website showing the login button's element: please see here
Below is code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
url = "https://www.xxxxxx.com/index.php/admin/index"
username = 'xxxx'
password = 'xxxxx'
driver = webdriver.Firefox(executable_path="C:\\Users\\kk\\AppData\\Local\\Programs\\Python\\Python38-32\\geckodriver.exe")
driver.get(url)
driver.find_element_by_name('aname').send_keys(username)
driver.find_element_by_name('apass').send_keys(password)
driver.find_elements_by_xpath('//style[@type="submit"]')
Rather than finding it with a CSS selector, why not use find_element_by_xpath()?
To get the XPath of that element, just right-click the HTML of the input in Inspect Element, hover over Copy, and you'll see "Full XPath".
The issue is your XPath:
driver.find_elements_by_xpath('//style[@type="submit"]')
Use one of these instead:
driver.find_elements_by_xpath('//input[@type="submit"]')
or
driver.find_elements_by_xpath('//input[@value="login"]')
# This is more precise, as many input tags could have type="submit"
Also, please use some sort of wait, as I am not sure the page will load fast enough every time you launch the URL.
You can identify the submit button by using any of these 2:
//input[@type="submit"] or //input[@value="login"]
They should work without any problem if you don't have any similar elements on your page (which I doubt)
But if you want to be more precise, you can mix these 2 into:
//input[@value="login" and @type="submit"]
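Here's a hedged sketch tying the suggested locator to an explicit wait (the field names aname/apass come from the question; adjust the timeout to taste):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
# Wait for the form before typing, then submit once the button is clickable
wait.until(EC.presence_of_element_located((By.NAME, 'aname'))).send_keys(username)
driver.find_element_by_name('apass').send_keys(password)
wait.until(EC.element_to_be_clickable(
    (By.XPATH, '//input[@value="login" and @type="submit"]')
)).click()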

Selenium Webdriver failing to click. Unsure why

I have the following code;
if united_states_hidden is not None:
    print("Country removed successfully")
    time.sleep(10)
    print("type(united_states_hidden) = ")
    print(type(united_states_hidden))
    print("united_states_hidden.text = " + united_states_hidden.text)
    print("united_states_hidden.id = " + united_states_hidden.id)
    print(united_states_hidden.is_displayed())
    print(united_states_hidden.is_enabled())
    united_states_hidden.click()
The outputs to the console are as follows:
Country removed successfully
type(united_states_hidden) =
<class 'selenium.webdriver.remote.webelement.WebElement'>
united_states_hidden.text = United States
united_states_hidden.id = ccea7858-6a0b-4aa8-afd5-72f75636fa44
True
True
As far as I am aware, this should work since it is a clickable web element; however, no click is delivered to the element. Any help would be appreciated, as I can't seem to find anything anywhere else. The element I am attempting to click is within a selector box.
It seems like a valid WebElement, given that you can print all of its info like you did in your example.
It's possible the element located is not the element that is meant to be clicked, so perhaps the click is succeeding but not really clicking anything.
You could try using a JavaScript click and see if that helps:
driver.execute_script("arguments[0].click();", united_states_hidden)
If this does not work for you, we may need to see the HTML on the page and the locator strategy you are using to find united_states_hidden so that we can proceed.
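Another option worth trying (just a sketch, on the assumption that something may be overlapping the element at its current position): scroll it into view and click via ActionChains:
from selenium.webdriver.common.action_chains import ActionChains

# Center the element in the viewport, then perform a real mouse click
driver.execute_script(
    "arguments[0].scrollIntoView({block: 'center'});", united_states_hidden)
ActionChains(driver).move_to_element(united_states_hidden).click().perform()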

Can't get "WebDriver" element data if not "eye-visible" in browser using Selenium and Python

I'm doing some scraping with Selenium in Python. My problem is that after I find all the WebElements, I'm unable to get their info (id, text, etc.) if the element is not actually VISIBLE in the browser that Selenium opened.
What I mean is:
First image
Second image
As you can see from the first and second images, I have the first 4 "tables" that are "visible" to me and to the code. However, there are 2 other tables (5 & 6, Gettho lucky dip & Sue Specs) that are not "visible" until I drag the right scrollbar down.
Here's what I get when I try to get the element info, without "seeing it" in the page:
Third image
Manually dragging the page to the bottom, and therefore making it "visible" to the human eye (and also to the code?), is the only way I can get the data from the WebElement I need:
Fourth image
What am I missing? Why can't Selenium do it in the background? Is there a way to solve this problem without going up and down the page?
PS: the page could be any dog race page on http://greyhoundbet.racingpost.com/. Just click City, Time, and then FORM.
Here's part of my code:
# I call this function with the URL and it returns the driver object
from selenium import webdriver

def open_main_page(url):
    chrome_path = r"c:\chromedriver.exe"
    driver = webdriver.Chrome(chrome_path)
    driver.get(url)
    # Wait for page to load (loading() is my own wait helper, defined elsewhere)
    loading(driver, "//*[@id='showLandingLADB']/h4/p", 0)
    element = driver.find_element_by_xpath("//*[@id='showLandingLADB']/h4/p")
    element.click()
    # Wait for second element to load, after click
    loading(driver, "//*[@id='landingLADBStart']", 0)
    element = driver.find_element_by_xpath("//*[@id='landingLADBStart']")
    element.click()
    # Wait for main page to load.
    loading(driver, "//*[@id='whRadio']", 0)
    return driver
Now I have the browser driver, which I can use to find the elements I want:
url = "http://greyhoundbet.racingpost.com/#card/race_id=1640848&r_date=2018-
09-21&tab=form"
browser = open_main_page(url)
# Find dog names
names = []
text: str
tags = browser.find_elements_by_xpath("//strong")
Now "TAGS" is a list of WebDriver elements as in the figures.
I'm pretty new to this area.
UPDATE:
I've solved the problem with a code workaround.
tags = driver.find_elements_by_tag_name("strong")
for tag in tags:
    driver.execute_script("arguments[0].scrollIntoView();", tag)
    print(tag.text)
This way the browser moves to each element's position and can read its information.
However, I still have no idea why, on this page in particular, I can't read elements that aren't visible in the browser area until I scroll and literally see them.
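One likely explanation: WebElement.text returns only the rendered (visible) text, so on pages like this, elements outside the viewport can come back empty. If the data is already in the DOM, reading the raw textContent attribute sidesteps the visibility check (a sketch, not tested against this site):
tags = driver.find_elements_by_tag_name("strong")
for tag in tags:
    # textContent is the raw DOM text, independent of visibility
    print(tag.get_attribute("textContent"))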

Selenium + Python: StaleElementReferenceException (selecting by class)

I'm trying to write a simple Python script using Selenium; the loop runs once and then throws a StaleElementReferenceException.
Here's the script I'm running:
from selenium import webdriver

browser = webdriver.Firefox()
type(browser)
browser.get('http://digital2.library.ucla.edu/Search.do?keyWord=&selectedProjects=27&pager.offset=50&viewType=1&maxPageItems=1000')
links = browser.find_elements_by_class_name('searchTitle')
for link in links:
    link.click()
    print("clicked!")
    browser.back()
I did try adding browser.refresh() to the loop, but it didn't seem to help.
I'm new to this, so please ... don't throw stuff at me, I guess.
It does not make sense to click through links inside of a loop like that: once you click the first link, you are no longer on the page where you found the links. Does that make sense?
Take the for loop out, and add something like link = links[0] to grab the first link, or something more specific to grab the specific link you want to click on.
If the intention is to click on every link on the page, then you can try something like the following:
links = browser.find_elements_by_class_name('searchTitle')
for i in range(len(links)):
    # Re-find the links on each iteration so the references aren't stale
    links = browser.find_elements_by_class_name('searchTitle')
    link = links[i]  # specify the i'th link on the page
    link.click()
    print("clicked!")
    browser.back()
EDIT: This might also be as simple as adding a pause after the browser.back(). You can do that with the following:
from time import sleep
...
sleep(5) # specify pause in seconds
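A hedged refinement of the same idea: instead of a fixed sleep, wait for the links to be present again after each browser.back():
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(browser, 10)
count = len(browser.find_elements_by_class_name('searchTitle'))
for i in range(count):
    # After browser.back() the DOM is rebuilt, so wait for fresh elements
    links = wait.until(EC.presence_of_all_elements_located(
        (By.CLASS_NAME, 'searchTitle')))
    links[i].click()
    print("clicked!")
    browser.back()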
