I am trying to locate an element that is a simple message (not a hyperlink) inside a chatbot. I tried to get xpath using 'inspect' and tried this -
driver.find_element_by_xpath("//*[@id='app']/div[1]/div/div/div/div/div[2]/div[2]/div/div[2]/div[3]/div[3]/div[1]/div/div/speak").text
This works, but it is not a reliable solution. I tried to shorten the XPath using starts-with or contains, but it didn't work.
Is there any other locator besides XPath that I can use when there are lots of 'div's in the XPath? And what does this mean in the XPath above?
[@id='app']
You should try to write a custom XPath instead of generating it with the browser.
It will look like "//*[@id='app']//speak".
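A minimal sketch of why the shorter XPath works, using Python's standard-library ElementTree on an invented stand-in for the chatbot DOM (the real markup will differ; in Selenium the equivalent call would be driver.find_element_by_xpath("//*[@id='app']//speak")):

```python
import xml.etree.ElementTree as ET

# Invented stand-in for the chatbot DOM; only the stable id and the <speak> tag matter.
page = ET.fromstring("""
<body>
  <div id="app">
    <div><div><div>
      <speak>Hello, how can I help?</speak>
    </div></div></div>
  </div>
</body>
""")

# Anchor on the stable id, then let // skip all the intermediate divs.
messages = [el.text for el in page.findall(".//*[@id='app']//speak")]
print(messages)  # ['Hello, how can I help?']
```

Because the path no longer depends on the exact nesting of divs, it keeps working when the intermediate layout changes.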
I've been researching this for two days now. There seems to be no simple way of doing this. I can find an element on a page by downloading the HTML with Selenium and passing it to BeautifulSoup, then searching by classes and strings. I want to click on this element after finding it, so I want to pass its XPath to Selenium. I have no minimal working example, only pseudocode for what I'm hoping to do.
Why is there no function/library that lets me search through the HTML of a webpage, find an element, and then request its XPath? I can do this manually by inspecting the webpage and clicking 'copy XPath'. I can't find any solutions to this on Stack Overflow, so please don't tell me I haven't looked hard enough.
Pseudo-Code:
# parser is a BeautifulSoup HTML object
for box in parser.find_all('span', class_="icon-type-2"):  # find all elements with a particular icon
    xpath = box.get_xpath()
I'm willing to change my code entirely, as long as I can locate a particular element and extract its XPath. So any ideas involving entirely different libraries are welcome.
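There is no such method on a BeautifulSoup object, but lxml provides exactly this: tree.getroottree().getpath(element) returns an index-qualified absolute XPath for any parsed element. As a rough sketch of what such a function does, here is a standard-library version that walks a parent map upward (the sample markup is invented):

```python
import xml.etree.ElementTree as ET

def absolute_xpath(root, target):
    """Build an absolute, index-qualified XPath for `target` inside `root`'s tree."""
    # ElementTree elements have no parent pointer, so build a child -> parent map.
    parent_of = {child: parent for parent in root.iter() for child in parent}
    steps = []
    node = target
    while node is not None:
        parent = parent_of.get(node)
        siblings = list(parent) if parent is not None else [node]
        same_tag = [c for c in siblings if c.tag == node.tag]
        steps.append(f"{node.tag}[{same_tag.index(node) + 1}]")
        node = parent
    return "/" + "/".join(reversed(steps))

root = ET.fromstring(
    '<html><body><span class="icon-type-2">a</span>'
    '<div><span class="icon-type-2">b</span></div></body></html>'
)
paths = [absolute_xpath(root, s) for s in root.iter('span')]
print(paths)  # ['/html[1]/body[1]/span[1]', '/html[1]/body[1]/div[1]/span[1]']
```

The resulting string can then be handed to Selenium to click the element, though index-based paths like these are brittle whenever the page structure changes.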
So, I have an XPath (I've verified it works and matches exactly one element via Google Chrome's developer tools).
I've tried various methods to get this XPath. Initially, using right click > Copy XPath in Chrome gave me:
//*[@id="hdr_f0f7cdb71b9a3f44782b87386e4bcb3e"]/th[2]/span/a
However, this ID changes on every reload.
So, I eventually got it down to:
//th[@name="name"]/span/a/text()
element = driver.find_element_by_xpath("//th[@name='name']/span/a/text()")
print(element)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//th[@name='name']/span/a/text()"}
Check this:
len(driver.find_elements_by_xpath('//*[contains(@id, "hdr_")]'))
If it doesn't return too many elements, you're done with this:
driver.find_elements_by_xpath('//*[contains(@id, "hdr_")]')
You should not be using /text() on a WebElement. Use "//th[@name='name']/span/a" as the XPath and print the text using element.text (not sure about the exact method for Python, but in Java it is element.getText()).
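To illustrate the split between selecting an element and reading its text, here is a standard-library ElementTree sketch on an invented table row: the path without /text() yields an element node, and the text is then read as a property, which is exactly what Selenium mirrors with element.text:

```python
import xml.etree.ElementTree as ET

# Invented row matching the XPath shape in the question.
row = ET.fromstring('<tr><th name="name"><span><a href="#x">Alice</a></span></th></tr>')

# Select the element (not its text node), then read the text as a property --
# the same two-step pattern Selenium enforces with find_element + element.text.
link = row.find(".//th[@name='name']/span/a")
print(link.text)  # Alice
```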
I suggest using an absolute XPath rather than a relative one; it might resolve the problem if the id changes on every load. Please see below how an absolute XPath differs from a relative one, using the Google search bar.
Relative XPath - //*[@id="tsf"]/div[2]/div/div[1]/div/div[1]/input
Absolute XPath - /html/body/div/div[3]/form/div[2]/div/div[1]/div/div[1]/input
I understand that, as you said, you cannot share a link, but people here can help if you share an inspect-element snapshot showing the DOM, so that any issue in the XPath can be rectified. Thanks :-)
For Chrome, I installed ChroPath to find elements on the page.
I want to find an XPath for the Like elements on an Instagram page, but this does not seem to work:
//span[contains(@class,'glyphsSpriteHeart__outline__24__grey_9 u-__7')]
I also tried this:
/html[1]/body[1]/div[3]/div[1]/div[2]/div[1]/article[1]/div[2]/section[1]/span[1]/button[1]/span[1]
and when Selenium clicks, I get:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"div._2dDPU.vCf6V div.zZYga div.PdwC2._6oveC article.M9sTE.L_LMM.JyscU div.eo2As section.ltpMr.Slqrh span.fr66n button.coreSpriteHeartOpen.oF4XW.dCJp8 > span.glyphsSpriteHeart__outline__24__grey_9.u-__7"}
How can I find the XPath? Any good extension or something?
You cannot "find" the XPath of an element. There are many, many XPaths that will find any given element. Some will be stable, others will be unstable. The decision on which XPath to use is based on your understanding and experience of Selenium, and your understanding of how the application under test is written and behaves.
If you are looking for a tool to experiment with different XPaths, Chrome's built-in Developer Tools Console allows you to test both XPath and CSS selectors.
In your specific scenario of finding elements by class name, a CSS selector is a much better choice than XPath: CSS selectors treat multiple classes as a set you can match individually, whereas XPath sees "class" as a single literal string, which is why you needed to use "contains".
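A quick standard-library demonstration of the "literal string" point (the markup is invented): an XPath equality test on @class only matches the full, space-joined attribute value, which is why per-class matching needs contains() in XPath but comes for free with a CSS selector like span.u-__7:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring(
    '<div><span class="glyphsSpriteHeart__outline__24__grey_9 u-__7">x</span></div>'
)

# XPath compares @class as one literal string, so matching a single class fails...
misses = root.findall(".//span[@class='u-__7']")
print(len(misses))  # 0

# ...and only the full, space-joined value matches.
hits = root.findall(".//span[@class='glyphsSpriteHeart__outline__24__grey_9 u-__7']")
print(len(hits))  # 1
```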
This might help:
https://selectorgadget.com/
This as well, to understand what you are manipulating:
https://www.w3schools.com/xml/xpath_syntax.asp
As for your example where you go down the tree using index numbers (i.e. /html[1]/body[1]): a slight change in the site will make your script fail. Find a way to build something more robust! Also have a look at CSS selectors if your object's appearance is known in advance.
To get all Like buttons on Instagram, use the CSS selector below:
span[aria-label="Like"]
You can get some helpful details here: https://www.w3schools.com/cssref/css_selectors.asp
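For completeness, the attribute selector above has a direct XPath equivalent, //span[@aria-label="Like"]. A standard-library sketch on invented stand-in markup, with the corresponding Selenium calls in comments:

```python
import xml.etree.ElementTree as ET

# Invented stand-in for the Instagram post markup.
post = ET.fromstring(
    '<section>'
    '<button><span aria-label="Like">l</span></button>'
    '<button><span aria-label="Comment">c</span></button>'
    '</section>'
)

# CSS:   driver.find_elements_by_css_selector('span[aria-label="Like"]')
# XPath: driver.find_elements_by_xpath('//span[@aria-label="Like"]')
likes = post.findall('.//span[@aria-label="Like"]')
print(len(likes))  # 1
```

Matching on a semantic attribute like aria-label is far more stable than the auto-generated class names in the failing selector above.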
There is a webpage consisting of anchor elements. I want to select the text and the href attribute values from all of the anchor elements. I am using Scrapy's XPath engine to do this, and have tried the following without much success:
response.xpath('//a[position()>1]/(text()|@href)').extract()
response.xpath('//a[position()>1]/text()/@href').extract()
But both of these error out.
Is this possible in a single XPath in the first place?
PS: it's probably not correct to say Scrapy's XPath engine; I think it's the lxml Python package.
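Neither attempt is valid XPath 1.0: parentheses cannot group steps mid-path (that is XPath 2.0 syntax), and text() nodes have no attributes to step into. The usual fix is to iterate over the anchor elements and pull text and href from each node; in lxml/Scrapy a top-level union such as '//a[position()>1]/text() | //a[position()>1]/@href' is also legal. A standard-library sketch of the per-node approach on invented markup:

```python
import xml.etree.ElementTree as ET

# Invented page: skip the first anchor, take text + href from the rest.
body = ET.fromstring('<body><a href="/a">A</a><a href="/b">B</a><a href="/c">C</a></body>')

# ElementTree has no position() support; since all anchors share a parent here,
# slicing the result list does the same job as [position()>1].
pairs = [(a.text, a.get('href')) for a in body.findall('.//a')[1:]]
print(pairs)  # [('B', '/b'), ('C', '/c')]
```

Keeping text and href paired per anchor is usually what you want anyway; the union query interleaves them into one flat list that you would have to re-pair yourself.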
I have created a list of elements matching an XPath and would like to click through each one successively. However, if I use the get_attribute("href") command, I get a 'unicode' object has no attribute 'click' error, because href is a string. If I don't use get_attribute and simply use this command:
driver.find_elements_by_xpath(".//div/div/div[3]/table//tr[12]/td/table//tr/td/a")
I get a list full of elements. I can successfully click on the first link in the list; however, when I click on the second I get this error: 'Element not found in the cache - perhaps the page has changed since it was looked up'.
I imagine the reason is that the page links I am trying to iterate through are generated by JavaScript from a search query (this is one of the href links: javascript:__doPostBack('ctl00$Content$listJobsByAll1$GridView2','Page$3')).
One more piece of relevant information: there are only two attributes at this XPath location: the href and the text.
So, given that I am dealing with a JavaScript-driven website and only those two attributes, I am hoping someone can tell me which WebDriver commands I can use to get a series of clickable static links. Beyond a specific answer, any advice on how I could have figured this out myself would be helpful.
If you click on a link with Selenium, you are changing the current page. The page that you are directed to doesn't have the next element, so the old references go stale.
To get the links, use:
'.//tag/@href'
You can try:
for elem in elems:
    elem.click()
    print(browser.current_url)
    browser.back()
    # note: after back() the remaining references may go stale and need re-finding