Understanding selenium web-elements list - python

Ok so I had a list of web elements created by Selenium's WebDriver find_elements_by_xpath method, and I had trouble utilizing the data.
Ultimately, the code I needed to get what I wanted was this:
menu_items = driver.find_elements_by_xpath('//div[@role="menuitem"]')[-2]
I was only ever able to get any meaningful data here by using a negative index. If I used any positive index, menu_items would return nothing.
However, when I had left menu_items as follows:
menu_items = driver.find_elements_by_xpath('//div[@role="menuitem"]')
I could iterate through the list and access the WebElements properly, meaning that with "for i in menu_items" I could call something like i.text and get the desired result. But again, I could not do menu_items[2]. I am new to Selenium, so if someone could explain what is going on here, that would be very helpful.

This line of code...
menu_items = driver.find_elements_by_xpath('//div[@role="menuitem"]')[-2]
...indicates you are selecting the second element counting from the right instead of the left: list[-1] refers to the last element in the list and list[-2] refers to the second-to-last element.
A bit more about your use case would have helped us construct a canonical answer. The number of visible/interactable elements at any given point in time, and the sequence in which the elements become visible/interactable, may vary based on the type of elements present in the DOM tree. In case the HTML DOM contains JavaScript-, Angular-, or ReactJS-enabled elements, even the position of the elements may differ.
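Negative indexing itself is plain Python list behaviour, independent of Selenium. A minimal sketch with string stand-ins for the WebElements:

```python
# Stand-ins for the WebElements returned by find_elements_by_xpath();
# indexing works the same way on the real list.
items = ["File", "Edit", "View", "Help"]

print(items[-1])   # last element
print(items[-2])   # second element from the end
print(items[0])    # first element counting from the left
```

If a positive index like items[2] fails on the real page, the list was likely shorter than expected at the moment it was fetched, which an explicit wait before the lookup usually fixes.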

Related

How to manipulate lists in Python selenium?

I have some entries that are to be made in a web portal.
The entries are in an Excel file. I have imported those into Python and converted them to lists so that I can access them to pick up individual entries.
I will try to explain the code approach here:
find the first element and use send_keys with the first element of the list;
same for the next two fields, then save the entry.
(ignore the syntax below)
driver.find_element_by_name("01st elementname").send_keys(list1[0])
driver.find_element_by_name("02nd elementname").send_keys(list2[0])
driver.find_element_by_name("03rd elementname").send_keys(list3[0])
Up to this portion is done.
Next I have to move to the second entry and fill it with the next index:
driver.find_element_by_name("01st elementname").send_keys(list1[1])
driver.find_element_by_name("02nd elementname").send_keys(list2[1])
driver.find_element_by_name("03rd elementname").send_keys(list3[1])
Then move to the next one.
How can I do this? I am not able to figure out a for loop for this.
I hope I explained this well. It could be very simple, but I am not from a programming background, so I need some help.
You mean how to loop over the list like this?
for i in range(len(list1)):
    driver.find_element_by_name("01st elementname").send_keys(list1[i])
    driver.find_element_by_name("02nd elementname").send_keys(list2[i])
    driver.find_element_by_name("03rd elementname").send_keys(list3[i])
    # submit/click() if required
See also https://www.w3schools.com/python/python_for_loops.asp
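An alternative to indexing with range(len(...)) is zip(), which pairs up the i-th entry of each list so no manual index is needed. A sketch with placeholder data (the field names and values below are made up):

```python
# Hypothetical stand-ins for the three lists imported from Excel.
list1 = ["a1", "a2"]
list2 = ["b1", "b2"]
list3 = ["c1", "c2"]

# zip() yields one tuple per row: the i-th item of each list together.
rows = list(zip(list1, list2, list3))

for v1, v2, v3 in rows:
    # In the real script these would be the three send_keys() calls, e.g.
    # driver.find_element_by_name("01st elementname").send_keys(v1)
    print(v1, v2, v3)
```

zip() also stops cleanly at the shortest list, so a row with missing cells cannot raise an IndexError.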

BeautifulSoup: extracting attribute for various items

Let's say we have HTML like this (sorry, I don't know how to copy and paste page info and this is on an intranet):
And I want to get the highlighted portion for all of the questions (this is like a Stack Overflow page). EDIT: to be clearer, what I am interested in is getting a list that has:
['question-summary-39968',
'question-summary-40219',
'question-summary-42899',
'question-summary-34348',
'question-summary-32497',
'question-summary-35308',
...]
Now I know that a working solution is a list comprehension where I could do:
[item["id"] for item in html_df.find_all(class_="question-summary")]
But this is not exactly what I want. How can I directly access question-summary-41823 for the first item?
Also, what is the difference between soup.select and soup.get?
I thought I would post my answer here if it helps others.
What I am trying to do is access the id attribute within the question-summary class.
Now you can do something like this and obtain it for only the first item (object?):
html_df.find(class_="question-summary")["id"]
But you want it for all of them. So you could do this to get the class data:
html_df.select('.question-summary')
But you can't just do
html_df.select('.question-summary')["id"]
Because you have a list filled with bs4 elements. So you need to iterate over the list and select just the piece that you want. You could use a for loop, but a more elegant way is a list comprehension:
[item["id"] for item in html_df.find_all(class_="question-summary")]
Breaking down what this does, it:
Creates a list of all the question-summary objects from the soup
Iterates over each element in the list, which we've named item
Extracts the id attribute and adds it to the new list
Alternatively you can use select:
[item["id"] for item in html_df.select('.question-summary')]
I prefer the first version because it's more explicit, but either one results in:
['question-summary-43960',
'question-summary-43953',
'question-summary-43959',
'question-summary-43947',
'question-summary-43952',
'question-summary-43945',
...]
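The soup.select vs. soup.get question above wasn't addressed directly. A minimal sketch with made-up markup mirroring the page structure: select() runs a CSS selector and returns a list of Tag objects, while .get() is a method on an individual Tag that fetches one attribute (returning None instead of raising KeyError when the attribute is missing):

```python
from bs4 import BeautifulSoup

# Hypothetical markup standing in for the intranet page.
html = """
<div class="question-summary" id="question-summary-1"></div>
<div class="question-summary" id="question-summary-2"></div>
"""
soup = BeautifulSoup(html, "html.parser")

# select() takes a CSS selector string; find_all() matches on tag
# names / attributes. Both return lists of Tag objects.
ids_select = [tag["id"] for tag in soup.select(".question-summary")]
ids_findall = [tag.get("id") for tag in soup.find_all(class_="question-summary")]

print(ids_select)
print(ids_findall)
```

tag["id"] and tag.get("id") are interchangeable here; the difference only shows when the attribute is absent.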

A more efficient way of finding value in dictionary and its position

I have a dictionary which contains (roughly) 6 elements, each of which looks like the following:
What I want to do is find a particular domain (that I pass to a method) and, if it exists, store the keyword and its position in an object. I have tried the following:
def parseGoogleResponse(response, website):
    i = 0
    for item in response['items']:
        if item['formattedUrl'] == website:
            print i
            break
        i += 1
This approach seems a bit tedious, and i also ends up stuck at i = 10; I'm pretty sure there is a more efficient way. I also have to keep in mind that if the website is not found the first time, the API is then queried for a maximum of up to 5 pages, each page containing 6 search results, so I somehow have to calculate the position if it is on a different page.
Any ideas?
Dictionaries in Python (before 3.7) are not ordered, and even in later versions there is no direct way to look up something's position in a dictionary, unlike list-type objects.
You can rather easily check for the existence of a value in the dictionary with something like:
if website in response['items'].values():
    pass  # If you enter this branch, the value is in the dictionary
else:
    pass  # If you end up here, it isn't in the dictionary
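That said, the for loop in the question iterates over response['items'] like a list of dicts, and for a list enumerate() yields the index for free, replacing the manual counter. A minimal sketch (the data is a hypothetical stand-in for the API response):

```python
# Hypothetical stand-in for the parsed API response.
response = {'items': [{'formattedUrl': 'http://a.example'},
                      {'formattedUrl': 'http://b.example'}]}

def find_position(response, website):
    # enumerate() pairs each item with its index, so no i = 0 / i += 1.
    for i, item in enumerate(response['items']):
        if item['formattedUrl'] == website:
            return i
    return None  # not found on this page

print(find_position(response, 'http://b.example'))
```

For the multi-page case, the overall position would be page_index * 6 plus the value returned here.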

Selecting all items individually in a list

I was wondering if it is possible to re-select each and every item in the rsList?
I am citing a simple example below, but I am looking at hundreds of items in the scene; hence the below is the simplest form of code I am able to come up with, based on my limited knowledge of Python.
rsList = cmds.ls(type='resShdrSrf')
# Output: [u'pCube1_GenShdr', u'pPlane1_GenShdr', u'pSphere1_GenShdr']
I tried using cmds.select, but it only takes my last selection (in memory), pSphere1_GenShdr, into account while forgetting the other 2, even though all three items appear selected in the UI.
I tried using a list and append, but it also does not seem to be working and the selection remains the same...
sel_list = []
for item in rsList:
    sel_list.append(item)
cmds.select(sel_list)
As such, will it be possible for me to perform a cmds.select on each of the item individually?
If you're trying to just select each item:
import pymel.core as pm
for i in pm.ls(sl=True):
    i.select()
but this should have no effect on your rendering
I think mine is a special case in which I need to add mm.eval("autoUpdateAttrEd;") for the first creation of my shader before I can duplicate it.
Apparently I need this command in order to get it to work.

Filter List of Strings By Keys

My project has required this enough times that I'm hoping someone here can give me an elegant way to write it.
I have a list of strings and would like to filter out duplicates using key/key-like functionality (like I can do with sorted(foo, key=bar)).
Most recently, I'm dealing with links.
Currently I have to create an empty list and add in values if the name is not already present.
Note: name() gives the name of the file the link links to -- just a regex match.
parsed_links = ["http://www.host.com/3y979gusval3/name_of_file_1",
"http://www.host.com/6oo8wha55crb/name_of_file_2",
"http://www.host.com/6gaundjr4cab/name_of_file_3",
"http://www.host.com/udzfiap79ld/name_of_file_6",
"http://www.host.com/2bibqho4mtox/name_of_file_5",
"http://www.host.com/4a31wozeljsp/name_of_file_4"]
links = []
[links.append(link) for link in parsed_links
 if name(link) not in [name(lnk) for lnk in links]]
I want the final list to have the full links (so I can't just get rid of everything but the filenames and use set); but I'd like to be able to do this without creating an empty list every time.
Also, my current method seems inefficient (which is significant as it is often dealing with hundreds of links).
Any suggestions?
Why not just use a dictionary?
links = dict((name(link), link) for link in parsed_links)
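Spelled out with a placeholder name() (the real one is a regex match), the dictionary trick keys each link by its filename, so a later link with the same filename overwrites an earlier one and the values are the de-duplicated links:

```python
def name(link):
    # Hypothetical stand-in for the regex that extracts the filename.
    return link.rsplit("/", 1)[-1]

parsed_links = [
    "http://www.host.com/3y979gusval3/name_of_file_1",
    "http://www.host.com/6oo8wha55crb/name_of_file_2",
    "http://www.host.com/2bibqho4mtox/name_of_file_1",  # duplicate filename
]

# One dict entry per filename; the last link seen for a filename wins.
links = dict((name(link), link) for link in parsed_links)
print(list(links.values()))
```

Each name() call runs once per link, so this is linear in the number of links rather than quadratic.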
If I understand your question correctly, your performance problems may come from the list comprehension that is repeatedly evaluated in a tight loop.
Try caching the result by putting the list comprehension outside of the loop, then use another comprehension instead of append() on an empty list:
linkNames = [name(lnk) for lnk in links]
links = [link for link in parsed_links if name(link) not in linkNames]
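If the first occurrence should win and order must be preserved, a single pass with a set for the membership test avoids rescanning a name list entirely (name() is again a placeholder for the real regex):

```python
def name(link):
    # Hypothetical stand-in for the regex that extracts the filename.
    return link.rsplit("/", 1)[-1]

parsed_links = [
    "http://www.host.com/3y979gusval3/name_of_file_1",
    "http://www.host.com/6oo8wha55crb/name_of_file_2",
    "http://www.host.com/2bibqho4mtox/name_of_file_1",  # duplicate filename
]

seen = set()
links = []
for link in parsed_links:
    key = name(link)
    if key not in seen:       # O(1) membership test on a set
        seen.add(key)
        links.append(link)

print(links)  # keeps the first link seen for each filename
```

This is O(n) overall, versus the O(n^2) of checking each new name against a growing list.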
