I am trying to write a Selenium test, but I have learned that the page is generated with PrimeFaces, so the element IDs change from time to time. Relying on those IDs is therefore not reliable. Is there anything I can do?
Not having meaningful stable IDs is not a problem, as there are always alternative ways to locate elements on a page. Just to name a few options:
partial id matches with XPath or CSS, e.g.:
# contains
driver.find_element_by_css_selector("span[id*=customer]")
driver.find_element_by_xpath("//span[contains(@id, 'customer')]")
# starts with
driver.find_element_by_css_selector("span[id^=customer]")
driver.find_element_by_xpath("//span[starts-with(@id, 'customer')]")
# ends with
driver.find_element_by_css_selector("span[id$=customer]")
classes that convey meaningful information about the underlying data ("data-oriented locators"):
driver.find_element_by_css_selector(".price")
driver.find_element_by_class_name("price")
going sideways from a label:
# <label>Price</label><span id="65123safg12">10.00</span>
driver.find_element_by_xpath("//label[.='Price']/following-sibling::span")
links by link text or partial link text:
driver.find_element_by_link_text("Information")
driver.find_element_by_partial_link_text("more")
And, you can, of course, get creative and combine them. There are more:
Locating Elements
There is also this relevant thread which goes over best practices when choosing a method to locate an element on a page:
What makes a good selenium locator?
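The three partial-match operators are just substring checks against the id value. Here is a browser-free sketch of the same matching semantics using only Python's standard library (the markup and ids below are made up):

```python
from html.parser import HTMLParser

# Hypothetical markup with PrimeFaces-style generated ids.
HTML = """
<span id="form:customer_name">Alice</span>
<span id="customer_total">10.00</span>
<span id="j_idt12:customer">footer</span>
<span id="unrelated">x</span>
"""

class SpanIdCollector(HTMLParser):
    """Collects the id attribute of every <span> tag."""
    def __init__(self):
        super().__init__()
        self.ids = []

    def handle_starttag(self, tag, attrs):
        if tag == "span":
            attr_map = dict(attrs)
            if "id" in attr_map:
                self.ids.append(attr_map["id"])

collector = SpanIdCollector()
collector.feed(HTML)

# These three filters mirror the *=, ^= and $= CSS operators
# (and contains()/starts-with() in XPath) respectively.
contains = [i for i in collector.ids if "customer" in i]
starts   = [i for i in collector.ids if i.startswith("customer")]
ends     = [i for i in collector.ids if i.endswith("customer")]

print(contains)  # ['form:customer_name', 'customer_total', 'j_idt12:customer']
print(starts)    # ['customer_total']
print(ends)      # ['j_idt12:customer']
```

Any of these will survive PrimeFaces regenerating the id prefix, as long as the stable part of the id is what you match on.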
Related
I am trying to do something I can't find any help on. I want to locate the XPath (or other 'address' information) of a particular element for later use by Selenium. I have the element's text and can find it using the Selenium By.LINK_TEXT strategy. However, I am writing an application where speed is critical, so I want to pre-find the element, store its XPath for later use, and then use the By.XPATH strategy. In general, finding an element with the By.LINK_TEXT construction takes 0.5 seconds, whereas the XPath lookup takes only 10-20% of that time. I tried the code below, but I get an error on getpath (WebElement object has no attribute getpath)
Thanks for any help
temp = br.find_element(By.LINK_TEXT, (str(day_to_book)))
print(temp.getpath())
The Selenium WebElement object returned by a driver.find_element() call is already a reference to the actual web element on the page. In other words, the WebElement object is the address of the actual web element you are asking about.
There is no way to get a By locator back from an already found WebElement.
So, in your particular example temp = br.find_element(By.LINK_TEXT, str(day_to_book)), temp is itself the address of the element, and you can keep it for future use (until the page is changed / refreshed).
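A browser-free analogy of that point (plain Python, with hypothetical data): a found element behaves like a direct object reference, not a search recipe, which is also why caching it avoids the repeated By.LINK_TEXT lookup cost:

```python
# Plain-Python analogy (hypothetical data): a WebElement is a reference,
# not a locator, so there is nothing like getpath() to extract from it.
elements = [{"text": "Mon 1"}, {"text": "Tue 2"}, {"text": "Wed 3"}]

# The one-off search, analogous to br.find_element(By.LINK_TEXT, ...).
temp = next(e for e in elements if e["text"] == "Tue 2")

# Reuse the reference directly; no second search, so no lookup cost.
print(temp["text"])  # Tue 2

# Rebuilding the collection (like a page refresh) leaves temp pointing
# at the old object; Selenium signals the same situation by raising
# StaleElementReferenceException instead.
elements = [{"text": "Tue 2 (rebuilt)"}]
print(temp["text"])  # still Tue 2
```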
I'm relatively new to using python & selenium. I'm trying to access NexisUni to automate a loop of searches. But, once I'm in NexisUni, I struggle to locate elements -- I get a "no such element" exception. I want to locate the search bar and input my search terms.
I've read about the fact that an iFrame might be present, and I need to switch frames. But, I don't see any frames! Is there a way to identify frames easily -- and could a frame be present without the word "frame" in the HTML? I've also tried loading the page longer and having the driver wait, to no avail.
The HTML code is below, the grey part is the piece I'd like to select:
HTML Code
The code I'm writing to identify it is:
SearchBar = driver.find_element_by_xpath('/html/body/main/div/div[13]/div[2]/div[1]/header/div[3]/section/span[2]/span/textarea').send_keys('search text')
I've also tried these two options:
find_element_by_class_name, find_element_by_id
WebDriverWait(driver,10).until(EC.presence_of_element_located)
... Any suggestions would be appreciated!
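For what it's worth, a frame always appears as an <iframe> or <frame> tag in the live DOM, but JavaScript can inject it after the initial load, so it may be absent from the raw page source you first read. One quick way to scan whatever HTML you do have (e.g. Selenium's driver.page_source) using only the standard library; the markup below is hypothetical:

```python
from html.parser import HTMLParser

class FrameFinder(HTMLParser):
    """Records the attributes of every <iframe>/<frame> tag seen."""
    def __init__(self):
        super().__init__()
        self.frames = []

    def handle_starttag(self, tag, attrs):
        if tag in ("iframe", "frame"):
            self.frames.append(dict(attrs))

# Hypothetical page source; in Selenium you would feed driver.page_source.
html = '<main><iframe id="searchFrame" src="/search"></iframe></main>'
finder = FrameFinder()
finder.feed(html)
print(finder.frames)  # [{'id': 'searchFrame', 'src': '/search'}]
```

If this turns up a frame, switch into it first (driver.switch_to.frame(...) in Selenium) before locating the search bar.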
I'm trying to get the number in all the <b> tags on this website. I want every single "qid" (question id), so I think I have to use qids = driver.find_elements_by_tag_name("b"); based on other questions I've found, I also need to implement a for loop and then print(qid.get_attribute("text")) for each qid. But my code can't even seem to find elements with the <b> tag, since I keep getting a NoSuchElementException. The appearance of the website leads me to believe the content I'm looking for is within an iframe, but I'm not sure whether that affects my code.
Here's a screencap of the website for reference
The html isn't of much use because the tag is its only defining trait:
<b>13570etc...</b>
Any help is much appreciated.
You could try searching by XPath:
driver.find_elements_by_xpath("//b")
Where // means "find all matching elements regardless of where they are in the document/current scope." Check out the XPath syntax here and mess around with a few different options.
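That "anywhere in the document" behavior can be sketched offline with the standard library's limited XPath support (toy markup standing in for the real site):

```python
import xml.etree.ElementTree as ET

# Toy markup standing in for the real page (the structure is an assumption).
doc = ET.fromstring(
    "<html><body>"
    "<div><b>13570</b></div>"
    "<table><tr><td><b>13571</b></td></tr></table>"
    "</body></html>"
)

# './/b' is ElementTree's spelling of '//b': every <b>, at any depth.
qids = [b.text for b in doc.findall(".//b")]
print(qids)  # ['13570', '13571']
```

Note that //b only searches the current document: if the <b> tags live inside an iframe, you must driver.switch_to.frame(...) first or the search will come back empty, which would match the NoSuchElementException you are seeing.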
For each vendor in an ERP system (total # of vendors = 800+), I am collecting its data and exporting this information as a pdf file. I used Selenium with Python, created a class called Scraper, and defined multiple functions to automate this task. The function, gather_vendors, is responsible for scraping and does this by extracting text values from tag elements.
Every vendor has a section called EFT Manager. EFT Manager has 9 rows I am extracting from:
For #2 and #3, both have string values (crossed out confidential info). But, #3 returns null. I don’t understand why #3 onward returns null when there are text values to be extracted.
The format of code for each element is the same.
I tried switching frames but that did not work. I tried to scrape from edit mode and that didn’t work as well. I was curious if anyone ever encountered a similar situation. It seems as though no matter what I do I can’t scrape certain values… I’d appreciate any advice or insight into how I should proceed.
Thank you.
Why not try to use
driver.find_element_by_class_name("panelList").find_elements_by_tag_name('li')
to collect all of the li elements, and then use li.text to retrieve their text values? It's hard to tell what your actual output is beyond your saying it "returns null".
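As a browser-free sketch of what that chain should hand back (the panelList markup here is assumed, since the question doesn't show it):

```python
import xml.etree.ElementTree as ET

# Hypothetical EFT Manager rows; the real markup is not shown in the question.
root = ET.fromstring(
    '<div><ul class="panelList"><li>Bank Name</li><li>Routing No.</li></ul></div>'
)
# Mirrors find_element_by_class_name("panelList").find_elements_by_tag_name("li")
texts = [li.text for li in root.findall(".//*[@class='panelList']/li")]
print(texts)  # ['Bank Name', 'Routing No.']
```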
Try to use visibility_of_element_located instead of presence_of_element_located
Try to get the textContent with JavaScript for the element (see Given a (python) selenium WebElement can I get the innerText?):
element = driver.find_element_by_id('txtTemp_creditor_agent_bic')
text = driver.execute_script("return arguments[0].textContent", element)
The following is what worked for me:
Get rid of the try/except blocks.
Find elements via IDs (not XPath).
That allowed me to extract text from elements I couldn't extract from before.
You should change the way you extract elements on the page to use IDs, since each of the fields has a distinct ID. If you want to use XPath, you can locate elements by their visible text instead, e.g.:
//span[text()='Bank Name']
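Offline, that text-based predicate amounts to filtering elements by their text (toy markup; the real row layout is an assumption):

```python
import xml.etree.ElementTree as ET

# Hypothetical row; 'Bank Name' is one of the EFT Manager labels.
row = ET.fromstring(
    "<tr><td><span>Bank Name</span></td><td><span>EXAMPLE BANK</span></td></tr>"
)
# Equivalent of //span[text()='Bank Name'] in plain Python.
matches = [s for s in row.findall(".//span") if s.text == "Bank Name"]
print(len(matches), matches[0].text)  # 1 Bank Name
```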
I am trying to make a basic form filler using Selenium on nike.com. I have completed most of it, but I am having trouble clicking on the button to select the gender. I have attempted to use many examples of find_element_by_xxxxx code, but none of them has worked. Finding elements by id and xpath hasn't come to much either. A typical error I get is Message: no such element: Unable to locate element. I am very new to coding, so I could very easily have made an error, but any ideas on how you would solve it would be much appreciated.
That XPath is very long, and you can simplify it.
By the looks of it, I would guess those IDs change every time there is a new session.
A more straightforward XPATH selector could be...
"//span[text() = 'Male']"
// specifies to search the entire document
span specifies the type of element to search for
text() specifies text that needs to be inside the element
(this will give you the span element but it should still work)
or
"//span[text() = 'Male']/parent::li//input"
(this will give you the actual input button)
Also, like Ollin Boer Bohan suggested, look into using waits before performing actions on your elements.
@cavan's answer is correct. You can also use an XPath like this:
//input[@type='button']/following::span[text()='Male']
Here we use the following axis to locate the Male option; you can do the same for the Female button.