I'm trying to click an element in a table that is separated only into columns, where every column has just one row.
The table looks like this:
I tried locating the element like this:
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, "//div[@class='srb-ParticipantLabelCentered gl-Market_General-cn1 ' and contains(., '140.0')]/following::span[text()='2.40']"))).click()
but instead of clicking the "2.40" in the row next to 140.0, it clicks the "2.40" that is next to 131.0.
It clicks the first element that matches the value "2.40".
How can I make it click the 2.40 that is next to 140.0?
It seems you have two similar div elements with the same class name and text value.
If so, use the last() option in XPath to get the last one, which should identify the expected element.
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, "(//div[@class='srb-ParticipantLabelCentered gl-Market_General-cn1 ' and contains(., '140.0')])[last()]/following::span[text()='2.40']"))).click()
This will work in both cases: whether there is only one matching element or more than one.
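If you need to do this for several label/odds pairs, splicing the values into the string by hand each time gets error-prone. A minimal sketch of a hypothetical helper (the odds_xpath name and signature are my own, not from the question) that builds the same locator from the two values:

```python
# Hypothetical helper: builds the XPath used above from the market label
# (e.g. "140.0") and the odds value (e.g. "2.40"), so the pairing is
# expressed in one place.
def odds_xpath(label: str, odds: str) -> str:
    return (
        "(//div[@class='srb-ParticipantLabelCentered gl-Market_General-cn1 ' "
        f"and contains(., '{label}')])[last()]"
        f"/following::span[text()='{odds}']"
    )

xpath = odds_xpath("140.0", "2.40")
print(xpath)
```

You would then pass the result to EC.element_to_be_clickable((By.XPATH, odds_xpath("140.0", "2.40"))).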
Assuming there are only two occurrences of 2.40, you can try the below XPath:
(//*[contains(text(),'2.40')])[2]
Explanation: this searches all the nodes (//*) that contain the text 2.40 and fetches the second occurrence ([2]).
I am searching for elements that contain a certain string with find_elements. Then, for each of these elements, I need to ensure that it contains a span with a certain text. Is there a way I can create a for loop over the list of elements I got?
today = self.dataBrowser.find_elements(By.XPATH, f'//tr[child::td[child::div[child::strong[text()="{self.date}"]]]]')
I believe you should be able to search within each element of the loop, something like this:
today = self.dataBrowser.find_elements(By.XPATH, f'//tr[child::td[child::div[child::strong[text()="{self.date}"]]]]')
for element in today:
    span = element.find_element(By.TAG_NAME, "span")
    if span.text == "The text you want to check":
        ...  # do something
Let me know if that works.
Sure, you can.
You can do something like the following:
today = self.dataBrowser.find_elements(By.XPATH, f'//tr[child::td[child::div[child::strong[text()="{self.date}"]]]]')
for element in today:
    span = element.find_elements(By.XPATH, './/span[contains(text(),"the_text_in_span")]')
    if not span:
        print("current element doesn't contain the desired span with text")
To make sure your code doesn't throw an exception when no span with the desired text is found in some element, use the find_elements method. It returns a list of WebElements; when there is no match it returns an empty list. An empty list is interpreted as boolean False in Python, while a non-empty list is interpreted as True.
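The truthiness rule the answer relies on can be seen with plain lists, standing in for what find_elements returns (no browser needed):

```python
# find_elements returns a list: empty on no match, non-empty otherwise.
# Empty lists are falsy, so "if not span:" detects the no-match case
# without raising an exception. Plain lists stand in for WebElements here.
no_match = []            # what find_elements returns when nothing matches
one_match = ["<span>"]   # one matching element found

print(bool(no_match))    # False
print(bool(one_match))   # True
```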
I know that the find() command finds only the first occurrence and that find_all() finds all of them. Is there a way to find a specific number?
If I want to find only the first two occurrences, is there a method for that, or does that need to be resolved in a loop?
You can use CSS selectors if you know the child position you need to extract. Let's assume the HTML you have is like this:
<div id="id1">
<span>val1</span>
<span>val2</span>
<span>val2</span>
</div>
Then you can select the first element with the following:
child = div.select('span:nth-child(1)')
Replace 1 with the number you want.
If you want to select multiple occurrences, you can concatenate the children like this:
child = div.select('span:nth-child(1)') + div.select('span:nth-child(2)')
to get the first two children
The nth-child selector can also get you the odd-numbered occurrences:
child = div.select('span:nth-child(2n+1)')
where n starts from 0:
n: 0 => 2n+1: 1
n: 1 => 2n+1: 3
..
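The mapping above is just the formula 2n+1 evaluated for successive n; a one-line check:

```python
# Child indices matched by the CSS formula 2n+1, for n = 0, 1, 2, ...
indices = [2 * n + 1 for n in range(5)]
print(indices)  # [1, 3, 5, 7, 9] -- the odd-numbered children
```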
Edited after addressing the comment, thanks!
If you are looking for first n elements:
As pointed out in the comments, you can use find_all to find all elements and then select the necessary number of them with list slices.
soup.find_all(...)[:n] # get first n elements
Or more efficiently, you can use limit parameter of find_all to limit the number of elements you want.
soup.find_all(..., limit = n)
This is more efficient because it doesn't iterate through the whole page; it stops execution after reaching the limit.
Refer to the documentation for more.
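The efficiency claim can be illustrated without BeautifulSoup itself: below, a generator stands in for walking the parse tree (it is a stand-in for illustration, not bs4 internals), and a visit counter shows that slicing after the fact scans everything while stopping at a limit does not:

```python
from itertools import islice

def scan(visited):
    # Stand-in for walking the whole parse tree; records each node visited.
    for i in range(1000):
        visited.append(i)
        yield f"element-{i}"

# Like find_all(...)[:2] -- materialize everything, then slice.
visited_all = []
first_two_sliced = list(scan(visited_all))[:2]

# Like find_all(..., limit=2) -- stop as soon as two matches are found.
visited_limited = []
first_two_limited = list(islice(scan(visited_limited), 2))

print(first_two_sliced == first_two_limited)  # True: same two elements
print(len(visited_all))                       # 1000: whole tree scanned
print(len(visited_limited))                   # 2: scanning stopped early
```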
If you are looking for the n-th element:
In this case you can use the :nth-child property of CSS selectors:
soup.select_one('span:nth-child(n)')
Say in a webpage:
Element A #A is here
... #Some code
Element B #B is here
A and B don't have a parent-child relationship, but they have the same locator.
No elements exist between A and B that have the same locator.
How do I locate B, given that I have already located A? (There are other elements below B that also have the same locator.)
Can you share the part of the HTML where Element A and Element B are present? Without it, no one can suggest an efficient way to locate the elements.
That said, when you have multiple elements with the same locator and you know the order in which they are retrieved by the driver, you can use the find_elements method:
elements = find_elements_by_css(<your_locator_strategy_>)
Assuming you have used CSS, this will return a list. Let's say Element B is the second item; you can then do
elements[1].click()
assuming you want to click on the element.
I have a list of elements which i retrieve through find_elements_by_xpath
results = driver.find_elements_by_xpath("//*[contains(@class, 'result')]")
Now I want to iterate through all the elements returned and find specific child elements
for element in results:
    field1 = element.find_elements_by_xpath("//*[contains(@class, 'field1')]")
My problem is that the context for the XPath selection gets ignored in the iteration, so field1 always just returns the first element with the field1 class on the page, regardless of the current element.
As @Andersson posted, the fix is quite simple; all that was needed was the dot at the beginning of the expression, which makes the XPath relative to the current element:
for element in results:
    field1 = element.find_elements_by_xpath(".//*[contains(@class, 'field1')]")
It's easier to use CSS selectors (less typing) and find all the elements at once:
for element in driver.find_elements_by_css_selector(".result .field1"):
    field1 = element
I have a query in one of my tests that returns 2 results.
Specifically, it matches the third level of an outline, found using
query = html("ul ol ul")
How do I select the first or second unordered list?
query[0]
decays to an HTMLElement
list(query.items())[0]
or
query.items().next() #(in case of the first element)
is there any better way that I can't see?
Note:
query = html("ul ol ul :first")
gets the first element of each list, not the first list.
From the PyQuery documentation on traversing, you should be able to select the first unordered list by using:
query('ul').eq(0)
Thus the second unordered list can be obtained by using:
query('ul').eq(1)
In jQuery one would use
html("ul ol ul").first()
.first() - Reduce the set of matched elements to the first in the set.
or
html("ul ol ul").eq(0)
.eq() - Reduce the set of matched elements to the one at the specified index.