I'm trying to click on all elements with the link text edit using:
for i in browser.find_elements_by_link_text('edit'):
    browser.find_element_by_link_text('edit').click()
Now, this obviously just clicks the first element found, i times. I would usually use an XPath/CSS selector and loop with an index, as you do with tables, to get the next element, but here the XPaths are identical apart from having different IDs, so I don't know how to click on all the elements in this case.
It's Reddit comments I'm trying to edit:
With the XPaths:
//*[@id="e9sczhm"]/div[2]/div[2]/a[5]
//*[@id="e9s7x4j"]/div[2]/div[2]/a[5]
Change

for i in browser.find_elements_by_link_text('edit'):
    browser.find_element_by_link_text('edit').click()

to

for i in browser.find_elements_by_link_text('edit'):
    i.click()

Inside the loop, i is already the element you want, so it should be i.click().
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys

# Find just one element
browser.find_element_by_link_text('edit').click()  # <<<--- starting point (current cursor position)

scheme = (Keys.TAB, Keys.ENTER, Keys.TAB, Keys.TAB, Keys.ENTER)
for key in scheme:
    ActionChains(browser).send_keys(key).perform()

The scheme walks the focus like this:

[Starting pos/elem][TAB][HIT][Element-1][TAB][Element-2][TAB][Element-3][TAB][HIT][Element-4]

just hitting the first and fourth elements!
I've run into a problem with my web automation. I have a drop-down menu with two clickable options. The id (see picture) changes every time I load the page, because the element is created anew each time beforehand.
The "-1" and "-0" part stays the same, though. Is there any way to make the XPath match only on the "-1" and "-0" in the id?
Current code:
folder = driver.find_element_by_xpath("//ng-select[@placeholder='Choose Key Set..']")
folder.click()
element_present = EC.presence_of_element_located((By.XPATH, "//div[@id='a0cb2db88cfe-1']"))
WebDriverWait(driver, timeout).until(element_present)
folder = driver.find_element_by_xpath("//div[@id='a0cb2db88cfe-1']")
folder.click()
Thanks in advance guys!
I still haven't figured out the proper way to locate the HTML element, but here is my workaround using an offset click:

from selenium.webdriver import ActionChains

actions = ActionChains(driver)
actions.move_to_element_with_offset(driver.find_element_by_xpath("//ng-select[@placeholder='Choose Key Set..']"), 0, 0)
actions.move_by_offset(100, 81).click().perform()
Since only a part of the id value changes, you can locate the elements by the constant part.
Just try these CSS selectors:
"[role='option'] [id*='-0']"
"[role='option'] [id*='-1']"
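Purely illustrative, in case it helps to see how the stable suffix slots into the selector: [id*='-0'] is a substring match, so it hits any id *containing* '-0'. The helper name below is made up; only the selector strings come from this answer.

```python
def keyset_option_selector(suffix):
    # [role='option'] scopes the match; [id*='…'] matches ids containing the suffix
    return "[role='option'] [id*='%s']" % suffix

print(keyset_option_selector('-0'))
print(keyset_option_selector('-1'))
```

You would then pass the result to driver.find_element_by_css_selector().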
I hope the below code works, but I have not tried it yet.
for -0 it should be - //div[1][@class='ng-option ng-star-inserted']
for -1 it should be - //div[2][@class='ng-option ng-star-inserted']
Please try this and let me know the status.
I'm trying to find an element with Xpath but it changes like so:
//*[@id="emailwrapper"]/div/div/table[1]/tbody/tr/td[2]/a
//*[@id="emailwrapper"]/div/div/table[2]/tbody/tr/td[2]/a
//*[@id="emailwrapper"]/div/div/table[3]/tbody/tr/td[2]/a
//*[@id="emailwrapper"]/div/div/table[4]/tbody/tr/td[2]/a
//*[@id="emailwrapper"]/div/div/table[5]/tbody/tr/td[2]/a
//*[@id="emailwrapper"]/div/div/table[6]/tbody/tr/td[2]/a
My current assumption is that the table I'm looking for will always be the last one in the table array, but I want to confirm this by counting the number of tables in the second div. Does anyone know how to do this?
A simple solution is to use the below XPath:
//*[@id='emailwrapper']/div/div/table
Your code should be:
lastTable = len(driver.find_elements_by_xpath("//*[@id='emailwrapper']/div/div/table")) - 1
print(lastTable)
Assuming there is at least one element matching the XPath '//*[@id="emailwrapper"]/div/div/table', you can simply do:
driver.find_elements_by_xpath('//*[@id="emailwrapper"]/div/div/table')
It returns a list; if nothing matches, the list is simply empty (unlike find_element, which raises NoSuchElementException).
Exact same results but written differently:
from selenium.webdriver.common.by import By
driver.find_elements(By.XPATH, '//*[@id="emailwrapper"]/div/div/table')
After which you can call len() on the list to get the number of elements.
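As a browser-free sanity check of the counting idea, the same count can be reproduced with the stdlib HTML parser. The snippet of markup below is invented to match the XPaths in the question; only the structure is assumed.

```python
from html.parser import HTMLParser

class TableCounter(HTMLParser):
    """Counts <table> start tags, mirroring len(find_elements_by_xpath(...))."""
    def __init__(self):
        super().__init__()
        self.tables = 0

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self.tables += 1

# Hypothetical markup shaped like the question's //*[@id="emailwrapper"]/div/div/table[N]
sample = (
    '<div id="emailwrapper"><div><div>'
    + "<table><tbody><tr><td></td><td><a href='#'>x</a></td></tr></tbody></table>" * 6
    + "</div></div></div>"
)
counter = TableCounter()
counter.feed(sample)
print(counter.tables)  # the last table would then be index counter.tables - 1
```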
I am trying to find an element via index number in Python Selenium. The element has the following xpath;
/html/body/div[4]/div[1]/div[3]/div/div[2]/div/div/div/div/div/div/div[1]/div[2]/div/div/div/div[2]/div/div/div/div[2]/div/div[3]/div[1]/div/div/div[9]
I have also included the below image of the element I want.
Element I want:
I am able to find the element via normal xpath, but I want to do this in a loop 10,000 times.
I was wondering if there is a way to find the element using its index number. Here is the code I have for it, for just index value 5, but it is not working.
Fund_click.append(driver.find_element_by_xpath("//div[@id='app']/div[1]/div[3]/div/div[2]/div/div/div/div/div/div/div[1]/div[2]/div/div/div/div[2]/div/div/div/div[2]/div/div[3]/div[1]/div/div/[div/@index='5']"))
Based on your snapshot you can try this way to see if you get any difference. I have considered both the class attribute and the index.
for i in range(1, 10000):
    print(driver.find_element_by_xpath("//div[@class='tg-row tg-level-0 tg-even tg-focused' and @index='" + str(i) + "']"))
If you want to use the index number, just try the code below. But I am not sure you can identify the elements by index alone, so better to try the first option above.
for i in range(1, 10000):
    print(driver.find_element_by_xpath("//div[@index='" + str(i) + "']"))
Try the below XPath to confirm whether you can locate all the elements. If the page has fully loaded, you should see all the matches (inspect manually):
//div[@id='app']//div[contains(@class, 'tg-row')]
You can fetch and store all the elements using the driver.find_elements_by_xpath() method with the above locator, so you avoid iterating by element index each time. Try the below code:
# Fetch and store all the matches
elements = driver.find_elements_by_xpath("//div[@id='app']//div[contains(@class, 'tg-row')]")
for element in elements:
    # print the index attribute to confirm each row was fetched
    print(element.get_attribute('index'))
If you get NoSuchElementException, add some delay/wait before fetching, and check whether the elements sit inside a frame/iframe.
If not all the elements are visible, you may need to perform some scroll operations.
If the above method doesn't work, you can try the below XPaths to identify the element by its index number and proceed with your approach.
(//div[@id='app']//div[contains(@class, 'tg-row')])[matching index]
or
//div[@id='app']//div[contains(@class, 'tg-row') and @index='provide index number here']
or
//div[contains(@class, 'tg-row') and @index='provide index number here']
or
//div[@index='provide index number here']
or
(//div[contains(@class, 'tg-row')])[provide matching index number here]
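If it helps, here is how those index-based locators can be built programmatically. The helper names are made up; the 'tg-row' class and @index attribute come from the question's markup, and (…)[n] is a 1-based position over the whole parenthesized expression.

```python
def row_by_position(n):
    # selects the n-th match (1-based) of the whole bracketed expression
    return "(//div[@id='app']//div[contains(@class, 'tg-row')])[%d]" % n

def row_by_index_attr(n):
    # filters on the element's own index attribute instead of its position
    return "//div[contains(@class, 'tg-row') and @index='%d']" % n

print(row_by_position(5))
print(row_by_index_attr(5))
```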
I hope it helps...
I'm working on making somewhat of a site map/tree (using anytree), and in order to do so, I need Selenium to find particular elements on a page (representing categories) and then systematically click through them, looking for new categories on each new page until we hit no more categories, i.e. all leaves, and the tree is populated.
I have much of this already written. My issue arises when trying to iterate through my elements list. I currently try to populate the tree depth-first, going down to the leaves and then popping back up to the original page to continue the same thing with the next element in the list. This, however, is resulting in a Stale element reference error because my page reloads. What is a workaround to this? Can I somehow open the new links in a new window so that the old page is preserved? The only fixes I have found for that exception are to neatly catch it, but that doesn't help me.
Here is my code so far (the issue lies in the for loop):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from anytree import Node, RenderTree
def findnodes(driver):
    driver.implicitly_wait(5)
    try:
        nodes = driver.find_elements_by_css_selector('h3.ng-binding')
    except:
        nodes = []
    return nodes

def populateTree(driver, par):
    url = driver.current_url
    pages = findnodes(driver)
    if len(pages) > 0:
        for page in pages:
            print(page.text)
            Node(page.text, parent=par)
            page.click()
            populateTree(driver, page.text)
            driver.get(url)
driver = webdriver.Chrome()
#Get starting page
main ='http://www.example.com'
root = Node(main)
driver.get(main)
populateTree(driver, root)
for pre, fill, node in RenderTree(root):
print("%s%s" % (pre, node.name))
I haven't worked in Python, but I have worked with Java/Selenium, so I can give you the idea for overcoming staleness.
Generally, we get a stale exception if the element's attributes or the surrounding DOM change after the web element was obtained. For example, if the user tries to click the same element on the same page but after a page refresh, they get a StaleElementReferenceException.
To overcome this, we can create a fresh web element whenever the page is changed or refreshed. The code below can give you some idea (it's in Java, but the concept is the same).
Example:
WebElement element = driver.findElement(By.xpath("//*[@id='StackOverflow']"));
element.click();
// page is refreshed
element.click(); // This will obviously throw a stale element exception
To overcome this, we can store the xpath in a string and use it to create a fresh web element as we go.
String xpath = "//*[@id='StackOverflow']";
driver.findElement(By.xpath(xpath)).click();
// page has been refreshed. Now create a new element and work on it
driver.findElement(By.xpath(xpath)).click(); // This works
Hope this helps you.
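The same idea can be demonstrated without a browser. The TinyPage class below is a toy stand-in for a page (not Selenium): a handle obtained before refresh() goes stale, while re-finding through the locator string keeps working.

```python
class StaleElementError(Exception):
    pass

class TinyPage:
    """Toy model of a page whose element handles are invalidated on refresh."""
    def __init__(self):
        self._generation = 0

    def find(self, locator):
        page, gen = self, self._generation
        class Handle:
            def click(self):
                # a handle from an older page generation is "stale"
                if gen != page._generation:
                    raise StaleElementError(locator)
                return "clicked " + locator
        return Handle()

    def refresh(self):
        self._generation += 1

page = TinyPage()
stale = page.find("//*[@id='StackOverflow']")
page.refresh()
# stale.click() would now raise StaleElementError; re-finding works:
fresh = page.find("//*[@id='StackOverflow']")
print(fresh.click())
```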
The xpath variable is not supposed to be a star; it's an XPath to the desired elements. The stale exception appears because we click something in the browser, which requires finding all the elements again after each click. So in each loop iteration we find all the elements with driver.find_elements_by_xpath(xpath). That gives a list, but we only need one element, so we take the element at a specific index idx, which ranges from 0 to the number of elements.
xpath = '*'
for idx in range(len(driver.find_elements_by_xpath(xpath))):
    element = driver.find_elements_by_xpath(xpath)[idx]
    element.click()
I'm new to Python.
I've hit a BIG problem!!
I visit a website and put about 200 options from a drop-down list into an array.
I want to click every option in the array and click a JavaScript button to submit.
Then I take something I want from that page and go back to the previous page to click the next option.
I do these actions about 200 times in a for loop.
Here is the code:
for option in arrName:
    if count > 0:
        option.click()
        string = u'Something'
        link2 = browser.find_element_by_link_text(string.encode('utf8'))
        link2.click()
        # do something I want
        browser.back()
    count = count + 1
In this code, I don't want to use the first option.
The PROBLEM comes after the program clicks the second option, clicks link2, and calls browser.back(); it answers me:
StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
Does that mean the options in the array disappear?
How can I keep using the options in the array after browser.back() in the for loop?
Thanks
Yes, this is happening because of the DOM refresh. You cannot simply iterate over an array of stored elements while clicking back and forth. The best option is to find the element at runtime and then click it. Avoid option.click() on a stored element; instead, find the next element with find_element on each pass. If you are not sure how to accomplish that, please provide the HTML.
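A sketch of the pattern this answer describes; the function name and locator are illustrative, and `browser` is assumed to be a Selenium webdriver. The key point is that elements are re-found by index on every pass instead of being reused from a list collected before browser.back().

```python
def iterate_options(browser, xpath, skip_first=True):
    # count once up front; the asker wants to skip the first option
    total = len(browser.find_elements_by_xpath(xpath))
    start = 1 if skip_first else 0
    for idx in range(start, total):
        # fresh lookup each iteration: nothing stale survives browser.back()
        option = browser.find_elements_by_xpath(xpath)[idx]
        option.click()
        # ... take what you need from the new page here ...
        browser.back()
```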