There are multiple buttons on my page containing similar hrefs; they differ only in the id_invoices value. I want to click one button on the page using XPath and an href that looks like:
href="/pl/payment/getinvoice/?id_invoices=461"
I can select all buttons using:
invoices = driver.find_elements_by_xpath("//a[contains(@href, '/payment/getinvoice/')]")
but I need to select only the button with the highest id_invoices. Can it be done? :)
What you can do is collect the hrefs, extract the ids, and click the link with the highest one:
invoices = driver.find_elements_by_xpath("//a[contains(@href, '/payment/getinvoice/')]")
ids = [int(inv.get_attribute("href").split("id_invoices=")[-1]) for inv in invoices]
highest = max(ids)
driver.find_element_by_xpath("//a[contains(@href, 'id_invoices=" + str(highest) + "')]").click()
I don't know much Python, so I'm giving you a direction/algorithm to achieve the same:
Use get_attribute('href');
you will get the URL strings.
You need to split each href string you collected in the invoice list.
Split by '=' and pick the last array value.
Now you need to typecast the string to int, as the last value after '=' will be a number.
Now you just need to pick the highest value.
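The steps above can be sketched in Python roughly like this (the locator and the id_invoices parameter come from the question; the helper function itself is just one way you might structure it):

```python
# Sketch of the algorithm above: split each href on 'id_invoices=',
# cast the tail to int, and keep the largest value.
def highest_invoice_id(hrefs):
    return max(int(href.split("id_invoices=")[-1]) for href in hrefs)

# With a live driver it would be used roughly like this:
# links = driver.find_elements_by_xpath("//a[contains(@href, '/payment/getinvoice/')]")
# best = highest_invoice_id([link.get_attribute("href") for link in links])
# driver.find_element_by_xpath(
#     "//a[contains(@href, 'id_invoices=%d')]" % best
# ).click()
```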
Since you have an XPath that returns all the desired elements, you just need to grab the href attribute from each one, split the href by '=' to get the id (2nd part of string), find the largest id, and then use the id to find the element you want and click on it.
invoices = driver.find_elements_by_xpath("//a[contains(@href, '/payment/getinvoice/')]")
ids = []
for invoice in invoices:
    ids.append(invoice.get_attribute("href").split('=')[1])
results = list(map(int, ids))  # you can't take max() of a list of strings, you won't get the right answer
max_id = max(results)
driver.find_element_by_xpath("//a[@href='/pl/payment/getinvoice/?id_invoices=" + str(max_id) + "']").click()
So I tried to gather all of the ids from the site and "extract" the numbers from them.
It looks like this on the site:
<div class="market_listing_row number_490159191836499" id="number_490159191836499">
<div class="market_listing_row number_490159191836499" id="number_490159191836499">
<div class="market_listing_row number_490159170836499" id="number_490159170836499">
So I located all of them using this XPath and, to be sure, printed the length of the list (and all of its elements while testing, but I deleted that part of the code), so I know for sure it's working and collecting all 50 different elements from the site.
elements = driver.find_elements_by_xpath('//*[starts-with(@id, "number_") and not(contains(@id, "_name"))]')
print("List 2 length is:", len(elements))
But when I try to make a list of the numbers without the "number_" prefix that each id starts with, I have a problem. The list called id that I create with get_attribute("id") is just one id (number_490159170836499, for example) repeated 22 times (that's the length of the id, so it must have something to do with it). list_of_ids works as intended and I get 490159170836499 as a result, but it's only one element (I guess because there's only that one number, repeated). This is the code I used:
for x in elements:
    id = x.get_attribute("id")
    list_of_ids = re.findall("\d+", id)
I also used this code to print all of the ids on the site, so I know for sure that the elements list has all of them in it and that get_attribute is working.
for ii in elements:
    print(ii.get_attribute("id"))
To be clear, I did import re.
Another guess:
import re
ids = []
for x in elements:
    id = x.get_attribute("id")
    ids.append(re.search(r"\d+", id)[0])
print(ids)
You can use split method as well.
for x in elements:
    id = x.get_attribute("id")
    a = id.split("_")[1]
    print(a)
I'm trying to iterate over a number of elements returned by matching class names, which I have stored in an array users. print(len(users)) outputs 12, which is exactly how many there should be. This is my code:
def follow():
    time.sleep(2)
    # iterate here
    users = []
    users = browser.find_elements_by_class_name('wo9IH')
    print(len(users))
    for user in users:
        user_button = browser.find_element_by_css_selector('li.wo9IH div.Pkbci').click()
        #user_button = browser.find_element_by_xpath('//div[@class="Pkbci"]/button').click()
However, currently only index [0] is being .click()'d, and the program terminates after this first click. What would be the problem? Why isn't the iterated index incrementing?
resource: image - red shows what's being iterated through and blue is each button being .click()'d
Try this. You can directly make an array of the buttons rather than an array of li elements, and click every button whose text is Follow. Simple:
browser.maximize_window()
users = browser.find_elements_by_xpath("//button[text()='Follow']")
print(len(users))  # check it must be 12
for user in users:
    browser.execute_script("arguments[0].click()", user)
    # user.click() would also click each button
Find all your elements by CSS selector as a list, and then iterate over that list to perform .click():
yourList = browser.find_elements_by_css_selector('li.wo9IH div.Pkbci')
users = browser.find_elements_by_class_name('wo9IH') returns a list of selenium.webdriver.remote.webelement.WebElement instances that can also be traversed.
In your implementation of the iteration, this fact about the items in the list is overlooked, and the entire page is searched by traversing the page source from the WebDriver instance (i.e. browser.find_element_by_css_selector).
Here is how to go about getting the button in the matched WebElements:
for user_web_element in users:
    # The next line works given that there is only a single <button>
    # in the screenshot for the matched WebElements.
    user_button = user_web_element.find_element_by_tag_name('button')
    user_button.click()
Until now I used a for loop to get all the elements on a page under a certain path, with this script:
for username in range(range_for_like):
    link_username_like = "//article/div[2]/div[2]/ul/div/li[" + str(num) + "]/div/div[1]/div/div[1]/a[contains(@class, 'FPmhX notranslate zsYNt ')]"
    user = browser.find_element_by_xpath(link_username_like).get_attribute("title")
    num += 1
    sleep(0.3)
But sometimes my CPU will exceed 100%, which is not ideal.
My solution was to find all the elements in one line using find_elements_by_xpath, but in doing so I can't figure out how to get all the "title" attributes.
I know that the path changes for every title (//article/div[2]/div[2]/ul/div/li[" + str(num) + "]/div/div[1]/div/div[1]/a), which is why I kept increasing the num variable. But how can I use this technique without a for loop?
What's the most efficient way, in terms of performance, to get all the attributes? I don't mind if it takes 2 minutes or more.
Here is how you can get all the people that liked your photo, by XPath:
//div[text()='Likes']/..//a[@title]
The code below gets the first 12 likers:
likes = browser.find_elements_by_xpath("//div[text()='Likes']/..//a[@title]")
for like in likes:
    user = like.get_attribute("title")
To get all the likes you have to scroll; for that, you can read the total number of likes and then scroll until all of them are loaded. To get the total you can use the //a[contains(.,'likes')]/span XPath and convert its text to an integer.
To scroll, use the JavaScript .scrollIntoView() on the last like; the final code would look like:
totalLikes = int(browser.find_element_by_xpath("//a[contains(.,'likes')]/span").text)
browser.find_element_by_xpath("//a[contains(.,'likes')]/span").click()
while True:
    likes = browser.find_elements_by_xpath("//div[text()='Likes']/..//a[@title]")
    likesLen = len(likes)
    if likesLen == totalLikes - 1:
        break
    browser.execute_script("arguments[0].scrollIntoView()", likes[likesLen - 1])
for like in likes:
    user = like.get_attribute("title")
How it works:
With //div[text()='Likes'] I find the unique div of the window that contains the likes. Then, to get all the likes (each one is an li), I go up to the parent div with the /.. selector and take every a with a title attribute. Because all the likes don't load immediately, you have to scroll down. For that I read the total like count before clicking on the likes link, then scroll to the last like (a[@title]) to force Instagram to load more data, until the total count equals the length of the likes list. When scrolling completes, I just iterate through all the likes in the list collected inside the while loop and read their titles.
I have this code
lst = ["Appearence", "Logotype", "Catalog", "Product Groups", "Option Groups", "Manufacturers", "Suppliers",
       "Delivery Statuses", "Sold Out Statuses", "Quantity Units", "CSV Import/Export", "Countries", "Currencies", "Customers"]
for item in lst:
    wd.find_element_by_link_text(item).click()
    assert wd.title != None
I don't want to write the list by hand.
I want to get the list lst directly from the browser.
I use:
m = wd.find_elements_by_css_selector('li[id=app-]')
print(m[0].text)
Appearence
I don't know how to pass the list into a loop.
(See the attached browser screenshot.)
Please help me understand how to get the list and pass it into a loop.
In your example the variable m will be a list of WebElements; you can take its length and iterate with the CSS pseudo-class :nth-child() over a range:
m = wd.find_elements_by_css_selector('li[id=app-]')
for elem in range(1, len(m) + 1):
    wd.find_element_by_css_selector('li[id=app-]:nth-child({})'.format(elem)).click()
    assert wd.title is not None
In the for loop we iterate over a range of integers starting at 1 and ending with the length of the element list (+1 because the end of a range is not inclusive); then we click the nth child of the selector using that number. .format(elem) replaces the {} in the string with the elem variable, in this case the current integer.
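As a small illustration of the .format() part (plain Python, no browser needed; the li[id=app-] selector is the one from the question):

```python
# Build the :nth-child() selector for each index with str.format:
selectors = ['li[id=app-]:nth-child({})'.format(n) for n in range(1, 4)]
print(selectors)
# → ['li[id=app-]:nth-child(1)', 'li[id=app-]:nth-child(2)', 'li[id=app-]:nth-child(3)']
```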
How can I increment the XPath variable value in a loop in a Python Selenium WebDriver script?
search_result1 = sel.find_element_by_xpath("//a[not((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[1])]|((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[1])").text
search_result2 = sel.find_element_by_xpath("//a[not((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[2])]|((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[2])").text
search_result3 = sel.find_element_by_xpath("//a[not((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[3])]|((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[3])").text
Why don't you create a list for storing search results, similar to:
search_results = []
for i in range(1, 11):  # I am assuming 10 results on a page, so you can set your own range
    result = sel.find_element_by_xpath("//a[not((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[%s])]|((//div[contains(@class,'s')]//div[contains(@class,'kv')]//cite)[%s])" % (i, i)).text
    search_results.append(result)
This sample code will create a list of the 10 result values. You can use it as a starting point to write your own; it's just a matter of automating the task.
so
search_results[0] will give you first search result
search_results[1] will give you second search results
...
...
search_results[9] will give you 10th search result
@Alok Singh Mahor, I don't like hardcoding ranges. I guess a better approach is to iterate through the list of WebElements:
search_results = []
result_elements = sel.find_elements_by_xpath("//not/indexed/xpath/for/any/search/result")
for element in result_elements:
    search_result = element.text
    search_results.append(search_result)