Switch (internal) tab using Selenium - python

I've been trying to access elements of a website using Selenium but keep getting this message:
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable
The problem seems to be that the website uses the same class names in both tabs and just toggles visibility to switch between them: when the first tab is displayed, the first div is visible and the second hidden; when the second tab is displayed, it is the other way around.
For the first tab I have no problem accessing the desired elements. However, when the second tab is displayed, I get the error message above.
Is there a way to switch between two divs the way one would switch between two iframes?
This is what my code looks like for the moment :
driver.get(url) #get url
driver.find_element(By.XPATH, "//input[@id='ctl00_cphMaster_ConsultationCitoyenCriteresRecherche_CritereRechercheTabContainer_RechercheSimpleTabPanel_RechercheSimpleControl_MotsClesTextBox']").send_keys(string) #enter searched term
driver.find_element(By.XPATH, "//input[@id='ctl00_cphMaster_ConsultationCitoyenCriteresRecherche_CritereRechercheTabContainer_RechercheSimpleTabPanel_RechercheSimpleControl_PorteeRadioButtonList_1']").click()
driver.find_element(By.XPATH, "//input[@class='Bouton BoutonRechercher']").click() #click the search button
WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.XPATH, "//div[@class='Header']")))
driver.find_element(By.XPATH, '//*[@id="__tab_ctl00_cphMaster_ConsultationCitoyenResultatsRecherche_ResultatsRechercheTabContainer_RechercheLETabPanel"]').click() #switch tab
WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.XPATH, "//div[@class='Header']")))
while True:
    listings = driver.find_elements(By.CLASS_NAME, "InscriptionSummaryValue")
    c = 1
    for i in listings:
        page = int(driver.find_element(By.XPATH, "//span[@class='CurrentPage']").text) #can't find the element because it seems to be located in the first div


Selenium cannot find class [duplicate]

I'm trying to automate a process on this page. According to its HTML code, clicking the wallet button at the top-right corner of the page reveals four main wallets to choose from to log in.
All of those wallets share the same class, elements__StyledListItem-sc-197zmwo-0 QbTKh, and I wrote the code below to try to get their button names (MetaMask, Coinbase Wallet...):
driver = webdriver.Chrome(service=s, options=opt) #execute the chromedriver.exe with the previous conditions
driver.implicitly_wait(10)
driver.get('https://opensea.io/') #go to the opensea main page.
WebDriverWait(driver, 3).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="__next"]/div/div[1]/nav/ul/div[2]/li/button'))) #wait for the wallet button to be enabled for clicking
wallet_button = driver.find_element(By.XPATH, '//*[@id="__next"]/div/div[1]/nav/ul/div[2]/li/button')
wallet_button.click() #click that wallet button
wallet_providers = driver.find_elements(By.CLASS_NAME, "elements__StyledListItem-sc-197zmwo-0 QbTKh") #get the list of wallet providers
for i in wallet_providers:
    print(i)
After running the code above, I noticed that it didn't print anything, because the wallet_providers array was empty. That is very odd, because I understood that calling find_elements(By.CLASS_NAME, "the_class_name") returns an array of all the elements sharing that class, but it didn't in this case.
So I would appreciate it if someone could explain what I did wrong. In the end I just want to click the MetaMask button, which doesn't always stay in the same position: sometimes it's the first element of the list, sometimes the second...
You are using the CLASS_NAME elements__StyledListItem-sc-197zmwo-0 QbTKh, which has a space in it.
In Selenium, a class name containing a space cannot be used as a locator and will normally throw an error.
The reason you did not get an error is that you are using find_elements, which returns either a list of web elements or an empty list.
So how do you resolve this?
Remove the space and put a . instead, to make a CSS_SELECTOR.
Try this:
wallet_providers = driver.find_elements(By.CSS_SELECTOR, ".elements__StyledListItem-sc-197zmwo-0.QbTKh") #get the list of wallet providers
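The space-to-dot rule is purely mechanical; a tiny helper (hypothetical, not part of Selenium) makes the transformation explicit:

```python
def compound_class_to_css(class_attr: str) -> str:
    # Split the attribute value on whitespace and join the class
    # tokens with "." to build a CSS compound selector.
    return "." + ".".join(class_attr.split())

print(compound_class_to_css("elements__StyledListItem-sc-197zmwo-0 QbTKh"))
# .elements__StyledListItem-sc-197zmwo-0.QbTKh
```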
To be honest, we can find a better locator than this, since values like 197zmwo-0 and QbTKh appear to be generated dynamically.
I would rather use this CSS selector:
li[class^='elements__StyledListItem'] span[font-weight]
or this xpath:
//li[starts-with(@class,'elements__StyledListItem')]//descendant::span[@font-weight]
Also, you should print it like this (this is one way; there are others as well):
Code:
driver.get("https://opensea.io/")
WebDriverWait(driver, 3).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="__next"]/div/div[1]/nav/ul/div[2]/li/button'))) #wait for the wallet button to be enabled for clicking
wallet_button = driver.find_element(By.XPATH, '//*[@id="__next"]/div/div[1]/nav/ul/div[2]/li/button')
wallet_button.click() #click that wallet button
wallet_providers = driver.find_elements(By.CSS_SELECTOR, "li[class^='elements__StyledListItem'] span[font-weight]") #get the list of wallet providers
for i in wallet_providers:
    print(i.get_attribute('innerText'))
Console output:
WalletConnect
MetaMask
Coinbase Wallet
Fortmatic
Process finished with exit code 0
The locators you are using are not relative enough; on my first inspection I couldn't even locate them in the DOM. So here is refactored code, with relative locators, that makes it work.
driver.get('https://opensea.io/') #go to the opensea main page.
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//*[@title='Wallet']"))).click()
wallets = WebDriverWait(driver, 10).until(EC.visibility_of_all_elements_located((By.XPATH, "//*[@data-testid='WalletSidebar--body']//li")))
for wallet in wallets:
    print(wallet.text)
Output:
WalletConnect
MetaMask
Popular
Coinbase Wallet
Fortmatic
Process finished with exit code 0
You use the class name elements__StyledListItem-sc-197zmwo-0 QbTKh, which has a space in it, to find the elements. In Selenium you can't locate elements by a class name that contains a space. You can use a CSS selector instead of the class name; in the CSS selector, replace each space in the class with a dot (.).
OR
You can use the parent class and then tags to point to the desired elements.
div.Blockreact__Block-sc-1xf18x6-0.eOSaGo > ul > li

I'm unable to locate the element to perform actions on it

I'm facing an issue locating an element on screen when there are no unique identifiers like an ID or text. After it opens the URL, I need to scroll down and click the 'Get Started' button to proceed...
Below is my code:
global driver
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.maximize_window()
driver.get("https://My URL")
driver.implicitly_wait(10)
screen = driver.find_element(By.XPATH, '//div[@class="swiper-wrapper"]')
screen.click() # this step doesn't throw any error; I tried using it to scroll down the screen
element = driver.find_element(By.XPATH, '//span[contains(text(),"Get Started")]')
driver.execute_script("arguments[0].scrollIntoView(true);", element )
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//span[contains(text(),"Get Started")]'))).click()
or
element.click()
Please help me in determining how to locate the element.
In this case you are trying to find a span that is inside a #shadow-root, and Selenium won't be able to click elements inside a #shadow-root in an easy way. Look here: How to click button inside #shadow-root (closed) using Selenium and Python
But in your case you probably don't need/want to click this specific span, because you have the custom element ion-button, which has a click event listener attached to it.
You can check the XPath value in the browser devtools: in the Elements tab, right-click the desired element and choose Copy > Copy XPath. Also make sure your element is not inside an iframe tag; that could cause problems as well.

How to get the next page in Selenium?

I am working with Selenium in Python and want to scrape all the pages, but I've run into trouble.
Here is the element I want to click:
I am using the following code:
link = driver.find_element_by_link_text('2')
link.click()
But it clicks on another element.
Is there another way to get to the next page?
First of all, it seems like the element you're trying to click is overlapped by another one, so you need to wait for it to become clickable, or for the other element to disappear:
el = WebDriverWait(driver, 15).until(EC.element_to_be_clickable((By.XPATH, '//div[@id="pagination_wrapper"]//li[@value="1"]')))
or
WebDriverWait(driver, LONG_TIMEOUT).until_not(EC.presence_of_element_located((By.XPATH, "//div[@class='close_cookie_alert']")))
Here's how you can find each of your elements:
link1 = driver.find_element_by_xpath('//div[@id="pagination_wrapper"]//li[@value="1"]')
link2 = driver.find_element_by_xpath('//div[@id="pagination_wrapper"]//li[@class="2"]')
link3 = driver.find_element_by_xpath('//div[@id="pagination_wrapper"]//li[contains(text(),"text of the third element")]')
If the usual click doesn't work, try clicking via JavaScript, like this:
driver.execute_script("arguments[0].click();", link1)
or, just move to the next page with:
driver.get('new_page')

Can't get "WebDriver" element data if not "eye-visible" in browser using Selenium and Python

I'm doing some scraping with Selenium in Python. My problem is that after I find all the WebElements, I'm unable to get their info (id, text, etc.) if the element is not actually VISIBLE in the browser opened by Selenium.
What I mean is:
First image
Second image
As you can see from the first and second images, the first four "tables" are "visible" to me and to the code. However, there are two other tables (5 & 6, Gettho lucky dip & Sue Specs) that are not "visible" until I drag the right-hand scrollbar down.
Here's what I get when I try to get the element info, without "seeing it" in the page:
Third image
Manually dragging the page to the bottom, and thereby making it "visible" to the human eye (and apparently also to the code?), is the only way I can get the data from the WebDriver element I need:
Fourth image
What am I missing? Why can't Selenium do it in the background? Is there a way to solve this problem without moving up and down the page?
PS: the page could be any kind of dog race page on http://greyhoundbet.racingpost.com/. Just click City, Time, and then FORM.
Here's part of my code:
# I call this function with the URL and it returns the driver object
def open_main_page(url):
    chrome_path = r"c:\chromedriver.exe"
    driver = webdriver.Chrome(chrome_path)
    driver.get(url)
    # Wait for page to load
    loading(driver, "//*[@id='showLandingLADB']/h4/p", 0)
    element = driver.find_element_by_xpath("//*[@id='showLandingLADB']/h4/p")
    element.click()
    # Wait for second element to load, after click
    loading(driver, "//*[@id='landingLADBStart']", 0)
    element = driver.find_element_by_xpath("//*[@id='landingLADBStart']")
    element.click()
    # Wait for main page to load.
    loading(driver, "//*[@id='whRadio']", 0)
    return driver
Now I have the browser "driver" which I can use to find the elements I want
url = "http://greyhoundbet.racingpost.com/#card/race_id=1640848&r_date=2018-09-21&tab=form"
browser = open_main_page(url)
# Find dog names
names = []
text: str
tags = browser.find_elements_by_xpath("//strong")
Now "tags" is a list of WebElement objects, as in the figures.
I'm pretty new to this area.
UPDATE:
I've solved the problem with a code workaround.
tags = driver.find_elements_by_tag_name("strong")
for tag in tags:
    driver.execute_script("arguments[0].scrollIntoView();", tag)
    print(tag.text)
In this manner the browser will move to the element position and it will be able to get its information.
However, I still have no idea why, on this page in particular, I can't read elements that are not visible in the browser area until I scroll down and literally see them.
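A likely explanation, and a scroll-free alternative: WebElement.text only returns *rendered* text, so it comes back empty for elements the page hides or has not yet laid out, while the textContent attribute of the DOM node is populated either way. A sketch (assuming the rows do exist in the DOM and are merely not rendered on screen):

```python
def all_strong_texts(driver):
    # "tag name" is the literal string behind By.TAG_NAME, so this sketch
    # needs no extra imports; get_attribute("textContent") reads the DOM
    # text even when the element is not rendered on screen.
    return [el.get_attribute("textContent")
            for el in driver.find_elements("tag name", "strong")]
```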

Selenium (python): can't switch to iframe (name is dynamically generated)

I'm having trouble selecting the iframe and accessing the elements inside it. The iframe name is dynamically generated (e.g. frame11424758092173, frame0005809321 or frame32138092173). The problem is that Selenium can't find the iframe no matter what I do...
Switching to the first frame doesn't work:
iframe = driver.find_elements_by_tag_name('iframe')[0]
driver.switch_to_frame(iframe)
Waiting for frame gets a timeout exception:
try:
    iframe = WebDriverWait(driver, 5).until(EC.frame_to_be_available_and_switch_to_it((By.TAG_NAME, 'iframe')))
except:
    logger.error(traceback.format_exc())
The following lines of code also times out:
try:
    iframe = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.TAG_NAME, "iframe")))
    driver.switch_to_frame(iframe)
except:
    logger.error(traceback.format_exc())
I have also tried iterating through the frames, but it can't find any; the returned list is empty:
iframes = driver.find_elements_by_tag_name('iframe')
#iframes is empty
really need some help...
Have you tried locating the iframe by its XPath using the contains() function?
iframe = driver.find_element_by_xpath('//iframe[contains(@name, "frame")]')
driver.switch_to_frame(iframe)
Now you can access elements within the iframe.
To exit the iframe use:
driver.switch_to_default_content()
The contains method lets you get an element by a partial attribute value. Pretty useful for dynamically generated IDs, names, etc. You can search by other attributes as well using XPath. For example, say your iframe element has the attribute value = "3". You could use:
iframe = driver.find_element_by_xpath('//iframe[contains(@name, "frame")][@value = "3"]')
driver.switch_to_frame(iframe)
This approach can be used with any number of attributes as well.
You could also try getting the element by its selector. Keep in mind that this limits what you can do with it:
driver.execute_script('document.querySelector("INSERT SELECTOR HERE").doSomething();')
To get the selector and/or XPath, inspect the element in your browser (Chrome in my case): right-click the element, click Inspect, then right-click the HTML element and choose Copy > Copy XPath or Copy > Copy selector.
If that doesn't work, my last resort is to go to the URL of the iframe. To get it, right-click the area of the webpage where the iframe is and click View Frame Source. That leads to a new page whose URL is shown at the top of the browser after view-source:. You can then simply navigate to that URL:
driver.get('insert url of iframe here')
And now you have access to the elements within the iframe. I do not recommend this approach if you are manipulating elements within the iframe and then exiting it: your changes will be lost. It only works if you are scraping info off the iframe, NOT if you are manipulating the elements within it. Finding the iframe element and switching into it is usually better and safer.
