Selenium - Iterating through groups of elements - Python

I'm trying to iterate over a number of elements returned by matching class names, which I have stored in a list users. print(len(users)) outputs 12, which is exactly how many elements there should be. This is my code:
def follow():
    time.sleep(2)
    # iterate here
    users = []
    users = browser.find_elements_by_class_name('wo9IH')
    print(len(users))
    for user in users:
        user_button = browser.find_element_by_css_selector('li.wo9IH div.Pkbci').click()
        #user_button = browser.find_element_by_xpath('//div[@class="Pkbci"]/button').click()
However, only index [0] is being .click()'d, and the program terminates after this first click. Why isn't the iteration moving past the first element?
Resource: image - red shows what's being iterated through and blue shows each button being .click()'d.

Try this. You can make a list of the buttons directly rather than a list of li elements, and simply click every button whose text is Follow:
browser.maximize_window()
users = browser.find_elements_by_xpath("//button[text()='Follow']")
print(len(users))  # check: it must be 12
for user in users:
    browser.execute_script("arguments[0].click()", user)
    # user.click() also works to click each button

Find all your elements by CSS selector as a list and then iterate that list to perform .click():
yourList = browser.find_elements_by_css_selector('li.wo9IH div.Pkbci')

users = browser.find_elements_by_class_name('wo9IH') returns a list of selenium.webdriver.remote.webelement.WebElement instances, each of which can itself be traversed.
Your implementation of the iteration overlooks this fact about the items in the list: the entire page is searched by traversing the page source from the WebDriver instance (i.e. browser.find_element_by_css_selector), which always returns the first match on the page.
Here is how to go about getting the button in the matched WebElements:
for user_web_element in users:
    # The next line assumes there is only a single <button>
    # inside each matched WebElement, as in the screenshot.
    user_button = user_web_element.find_element_by_tag_name('button')
    user_button.click()

Related

Trying to Scroll inside a div with selenium, scroller function only scrolls up to a certain amount and then just stops

I want to get a list of all the list items present inside a div with a scroller. They are not all loaded when the page loads; rather, the items are loaded dynamically as the user scrolls down (until there are no elements left). This is the scroller script I tried to implement:
def scroller():
    userList = None
    prev = 0
    while True:
        time.sleep(5)
        userList = WebDriverWait(browser, 50).until(
            EC.presence_of_all_elements_located((By.CLASS_NAME, '<class of list item>'))
        )
        n = len(userList)
        if n == prev:
            break
        prev = n
        # scroll the last element in the list into view
        userList[-1].location_once_scrolled_into_view
This function scrolls the list up to a certain length, but doesn't reach the full length of the elements (not even half). Can someone please suggest a better way to do this?
Thank you
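The loop above breaks the first time a wait cycle finds no new items, so a single slow batch ends the scroll early. A browser-free sketch of a more tolerant stop rule (collect_all and load_more are illustrative names, not Selenium APIs) only gives up after several unchanged rounds in a row:

```python
def collect_all(load_more, max_stalls=3):
    """Call load_more() until the item count stops growing for
    max_stalls consecutive rounds, then return the final list."""
    items = []
    prev = -1
    stalls = 0
    while stalls < max_stalls:
        items = load_more()
        if len(items) == prev:
            stalls += 1      # no growth this round
        else:
            stalls = 0       # growth seen, reset patience
        prev = len(items)
    return items

# Simulated lazy list: most calls reveal two more items, but every
# third call is a "slow" round that adds nothing.
data = list(range(10))
state = {"shown": 0, "calls": 0}

def fake_load_more():
    state["calls"] += 1
    if state["calls"] % 3 != 0:
        state["shown"] = min(state["shown"] + 2, len(data))
    return data[:state["shown"]]

print(len(collect_all(fake_load_more)))  # → 10
```

In Selenium terms, load_more would scroll the last element into view and re-run find_elements; the stall counter keeps one slow network round from terminating the loop prematurely.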

Searchable QComboBox only giving me 1 result

I'm trying to make a searchable combo box. Right now it searches, but it only gives me one result, and I have to type in the whole item for the list to be narrowed to that single item. I want it so that if there are multiple partial matches in the editable (but not insertable) combo box, every matching item is shown. For example, typing "200" should list every item that contains "200" anywhere, and typing "100" afterwards should give a whole different list of items, now containing "100" anywhere instead of the previous "200".
Sample of my list
self.Part = ["200019359", "300000272", "300000275", "200018792", "10024769", "10015919", "102000765"]
My function
def PartNumber(self):
    if self.PartNum.currentText() != "":
        results = [s for s in self.Part if self.PartNum.currentText() in s]
        self.PartNum.clear()
        self.PartNum.addItems(results)
    else:
        self.PartNum.clear()
        self.PartNum.addItems(self.Part)
Connecting it to the combo box
self.PartNum.activated.connect(self.PartNumber)
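The list-comprehension filter above already matches partial strings anywhere in the item, which can be checked without Qt (filter_parts is an illustrative name). That suggests the real problem is when the slot fires: activated only triggers when an item is selected, so connecting the slot to editTextChanged instead (or attaching a QCompleter set to Qt.MatchContains) would likely re-filter on every keystroke.

```python
# Qt-free check of the substring filter used in PartNumber():
# keep every part number containing the typed text anywhere.
part_numbers = ["200019359", "300000272", "300000275",
                "200018792", "10024769", "10015919", "102000765"]

def filter_parts(typed, parts=part_numbers):
    # Empty input restores the full list, mirroring the else branch.
    return list(parts) if typed == "" else [s for s in parts if typed in s]

print(filter_parts("200"))  # → ['200019359', '200018792', '102000765']
print(filter_parts("100"))  # → ['10024769', '10015919']
```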

How do I click on 2nd html element which is a duplicate of first element using selenium and python?

I want to click on an element (a button) that is repeated throughout the website, but how do I click on, let's say, the second button, not the first?
Here is the code of the button I want to click:
SHOP NOW
However, the issue is that sometimes the button may be greyed out if the item is not in stock, so I don't want to click it.
As a result, here is all of my code:
def mainclick(website):
    while True:
        time.sleep(1)
        price_saved = [i.text.replace('$', "").replace(',', '') for i in driver.find_elements_by_css_selector('[itemprop=youSave]')]
        print(price_saved)
        for g in range(len(price_saved)):
            a = g + 1
            if float(price_saved[g]) > 200:
                try:
                    driver.find_element_by_link_text("SHOP NOW")[a].click()
                    time.sleep(3)
                    try:
                        driver.find_element_by_id("addToCartButtonTop").click()
                        driver.execute_script("window.history.go(-1)")
                    except:
                        driver.execute_script("window.history.go(-1)")
                except:
                    print("couldn't click")
            print(a)
        driver.find_element_by_link_text("Next Page").click()
    print("all pages done")

# starts the timer
start_time = time.time()
mainweb = "https://www.lenovo.com/us/en/outletus/laptops/c/LAPTOPS?q=%3Aprice-asc%3AfacetSys-Memory%3A16+GB%3AfacetSys-Processor%3AIntel%C2%AE+Core%E2%84%A2+i7%3AfacetSys-Processor%3AIntel%C2%AE+Core%E2%84%A2+i5%3AfacetSys-Memory%3A8+GB&uq=&text=#"
driver.get(mainweb)
mainclick(mainweb)
I tried using [a] to click on a certain one, but it doesn't seem to work. Also, the href of the SHOP NOW button might change based on the product.
You can collect the elements using .find_elements*:
elements = driver.find_elements_by_link_text('insert_value_here')
elements[0].click()
The example above clicks the first element; replace the index [0] with the one you want.
If you are sure that you always want to click the 2nd button, try the XPath below:
(//*[@class='button-called-out button-full facetedResults-cta'])[2]
If the count of buttons is not the same every time (some may be greyed out), use find_elements to get the count first:
buttons = driver.find_elements_by_xpath("//*[@class='button-called-out button-full facetedResults-cta']")
count = len(buttons)
Append the index you want to the XPath in place of the '2' dynamically, and you can click the first/second button that is not greyed out.
You can use XPath with an index a:
driver.find_element_by_xpath("(//a[.='SHOP NOW'])[{}]".format(a))
Note that the first element has index 1.
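Since XPath positions start at 1, the generated locator strings can be sanity-checked without a browser (shop_now_xpath is just an illustrative helper):

```python
# Build a 1-based indexed XPath locator for the nth "SHOP NOW" link;
# XPath counts from 1, so position 1 is the first match on the page.
def shop_now_xpath(position):
    return "(//a[.='SHOP NOW'])[{}]".format(position)

print(shop_now_xpath(1))  # → (//a[.='SHOP NOW'])[1]
print(shop_now_xpath(2))  # → (//a[.='SHOP NOW'])[2]
```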

Get all the attributes "title" with selenium in python

Until now I used a for loop to get all the elements on a page under a certain path with this script:
for username in range(range_for_like):
    link_username_like = "//article/div[2]/div[2]/ul/div/li[" + str(num) + "]/div/div[1]/div/div[1]/a[contains(@class, 'FPmhX notranslate zsYNt ')]"
    user = browser.find_element_by_xpath(link_username_like).get_attribute("title")
    num += 1
    sleep(0.3)
But sometimes my CPU usage exceeds 100%, which is not ideal.
My solution was to find all the elements in one call using find_elements_by_xpath, but in doing so I can't figure out how to get all the "title" attributes.
I know that the path changes for every title, //article/div[2]/div[2]/ul/div/li[" + str(num) + "]/div/div[1]/div/div[1]/a, which is why I kept incrementing the num variable, but how can I use this technique without a for loop?
What's the most efficient way in terms of performance to get all the attributes? I don't mind if it takes 2 minutes or more.
Here is how you can get all the people that liked your photo by XPath:
//div[text()='Likes']/..//a[@title]
The code below gets the first 12 likers:
likes = browser.find_elements_by_xpath("//div[text()='Likes']/..//a[@title]")
for like in likes:
    user = like.get_attribute("title")
To get all likes you have to scroll; for that, get the total like count first and then scroll until all likes are loaded. To get the total you can use the //a[contains(.,'likes')]/span XPath and convert its text to an integer.
To scroll, use JavaScript's scrollIntoView() on the last like. The final code would look like:
totalLikes = int(browser.find_element_by_xpath("//a[contains(.,'likes')]/span").text)
browser.find_element_by_xpath("//a[contains(.,'likes')]/span").click()
while True:
    likes = browser.find_elements_by_xpath("//div[text()='Likes']/..//a[@title]")
    likesLen = len(likes)
    if likesLen == totalLikes - 1:
        break
    browser.execute_script("arguments[0].scrollIntoView()", likes[likesLen - 1])
for like in likes:
    user = like.get_attribute("title")
How it works:
With //div[text()='Likes'] I find the unique div of the likes window. To get all the likes (each is an li) I go up to the parent div with the /.. selector and then take every a that has a title attribute. Because the likes don't all load immediately, you have to scroll down; for that, I read the total like count before clicking on the likes. Then I scroll to the last like (a[@title]) to force Instagram to load more data, until the total count equals the length of the likes list. When scrolling completes, I iterate through the final list and collect the titles.

Select button by highest number in xpath

There are multiple buttons on my page containing similar hrefs; they differ only in id_invoices. I want to click one button on the page using XPath and an href which looks like:
href="/pl/payment/getinvoice/?id_invoices=461"
I can select all buttons using:
invoices = driver.find_elements_by_xpath("//a[contains(@href, '/payment/getinvoice/')]")
but I need to select only the button with the highest id_invoices. Can it be done? :)
What you can do is collect the href attributes, extract the numeric id from each, and build the XPath with the largest one:
invoices = driver.find_elements_by_xpath("//a[contains(@href, '/payment/getinvoice/')]")
ids = [int(e.get_attribute("href").split("id_invoices=")[-1]) for e in invoices]
highest = max(ids)
driver.find_element_by_xpath("//a[contains(@href, 'id_invoices=" + str(highest) + "')]").click()
I don't know much about Python, so here is a direction/algorithm to achieve the same:
Using get_attribute('href') you will get the URL strings.
Split each href string you collected from the invoice list by '=' and pick the last part.
Typecast that string to int, as the value after '=' is a number.
Then just pick the highest value.
Since you have an XPath that returns all the desired elements, you just need to grab the href attribute from each one, split the href on '=' to get the id (the 2nd part of the string), find the largest id, and then use that id to find the element you want and click on it.
invoices = driver.find_elements_by_xpath("//a[contains(@href, '/payment/getinvoice/')]")
ids = []
for invoice in invoices:
    ids.append(invoice.get_attribute("href").split('=')[1])
results = list(map(int, ids))  # max() on a list of strings compares lexically, so convert first
highest = max(results)
driver.find_element_by_xpath("//a[@href='/pl/payment/getinvoice/?id_invoices=" + str(highest) + "']").click()
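The id extraction is plain string work and can be verified without a browser; the sample hrefs below are made up. Note that the comparison must be numeric: as strings, "461" sorts above "1022".

```python
# Pick the highest id_invoices from a list of invoice hrefs.
hrefs = [
    "/pl/payment/getinvoice/?id_invoices=461",
    "/pl/payment/getinvoice/?id_invoices=459",
    "/pl/payment/getinvoice/?id_invoices=1022",
]

# Convert to int before taking max: max() on the raw strings
# would pick "461" because '4' > '1' lexically.
highest = max(int(h.split("id_invoices=")[-1]) for h in hrefs)
print(highest)  # → 1022
```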
