The XPath below works in Firefox but not in Chrome.
Any ideas?
/html/body/app/div/compliance/div/main/reporting/div[1]/div[2]/download-button/iron-dropdown/div/div/button[2]
Nothing screams at me as being incorrect about your xpath, but without knowing the page you are interacting with it's hard to tell.
Have you tried searching that xpath in chrome dev tools?
Also, if possible, try to key off of attributes the button has rather than its index and path. That generally makes for a more robust XPath.
E.g. //button[@attribute="specificAttributeValue"]
The // indicates that it is a relative path.
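For instance, a minimal sketch of locating that download button by an attribute instead of the full absolute path (the attribute name and value here are hypothetical, since the actual markup isn't shown):

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/reporting")  # hypothetical URL

# Relative XPath keyed off an attribute rather than position in the tree
button = driver.find_element_by_xpath('//button[@aria-label="Download CSV"]')
button.click()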
I'm searching for a tag inside a class. I tried many methods but I couldn't get the value.
see source code
The data I need is inside the "data-description" attribute.
How can I get the "data-description"?
I tried these methods, but they didn't work:
driver.find_element_by_name("data-description")
driver.find_element_by_css_selector("data-description")
I solved it with this method:
hizmetler = []  # collected attribute values
icerisi = browser.find_elements_by_class_name('integratedService')
for mycode in icerisi:
    hizmetler.append(mycode.get_attribute("data-description"))
Thanks for your help.
I think a CSS selector would work best here. "data-description" isn't an element; it's an attribute of an element. The CSS selector for an element with a given attribute would be:
[attribute]
Or, to be more specific, you could use:
[attribute="attribute value"]
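As a sketch (assuming the elements carry a data-description attribute, as in the question), you could select by the attribute and then read its value:

# Select every element that has a data-description attribute
elements = driver.find_elements_by_css_selector('[data-description]')
descriptions = [el.get_attribute('data-description') for el in elements]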
Here's a good tip:
Most web browsers have a way of copying an element's selector or XPath. For example, in Safari if you view the source code and then right-click on an element, it will give you the option to copy it. Then select XPath or Selector and in your code use driver.find_element_by_xpath() or driver.find_element_by_css_selector(). I am certain Google Chrome and Firefox have similar options.
This method is not always foolproof, as the copied XPath can be very specific, meaning that slight changes to the website will cause your script to crash, but it is a quick and easy solution, and is especially useful if you don't plan on reusing your code months or years later.
Issue:
Upon webdriving to chrome://settings/content using chromedriver and Selenium, I came across the issue where no elements could be found, even if I gave the exact XPath copied from Chrome dev tools, or if I varied my search method, e.g. using find_element_by_tag_name() and looking for more basic elements such as the <h1>Settings</h1> element.
This is not an issue of my search method as I can go to any other web page and select elements correctly.
Is this a security feature of Chrome which stops webdriving in their settings or something alike?
Specs:
Python 3.7
Chromedriver
Selenium (latest version)
It's the Shadow DOM; using a CSS selector you can pierce it with /deep/:
# Pierce the shadow root of the <settings-ui> element to reach the <h1>
driver.find_element_by_css_selector('settings-ui /deep/ h1')
# or, starting from any host element
driver.find_element_by_css_selector('* /deep/ h1')
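Note that the /deep/ combinator has been removed in recent Chrome versions, so the above may only work on older builds. A hedged alternative is to step into the shadow root with JavaScript; the exact inner structure of the settings page is an assumption here:

# Assumes a newer Chrome where /deep/ no longer works; the inner selector is a guess
heading = driver.execute_script(
    'return document.querySelector("settings-ui")'
    '.shadowRoot.querySelector("h1")'
)
if heading is not None:
    print(heading.text)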
Unfortunately, there is no permission that would allow you to access the chrome:// URL scheme.
This is an explicit safety mechanism against potentially malicious changes to Chrome settings.
You can get access if you enable the extensions-on-chrome-urls flag, but obviously you can't do that on machines you don't fully control.
Additionally, there is no API to manipulate users in Chrome.
source: Here
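If you do fully control the machine, one hedged sketch is to pass that flag as a Chrome command-line switch when starting the driver; whether this actually unlocks chrome:// pages for WebDriver is an assumption, not something confirmed above:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
# Assumption: the chrome://flags entry maps to this command-line switch
options.add_argument('--extensions-on-chrome-urls')
driver = webdriver.Chrome(options=options)
driver.get('chrome://settings/content')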
My code is:
driver.get("http://www.thegoodguys.com.au/buyonline/SearchDisplay?pageSize=16&beginIndex=0&searchSource=Q&sType=SimpleSearch&resultCatEntryType=2&showResultsPage=true&pageView=image&searchTerm=laptops")
link = []
linkPrice = []
price = []
productName = []
Site = 'Harvey Norman'
link = driver.find_elements_by_class_name("photo")
linkPrice = driver.find_elements_by_class_name("product-title")
price = driver.find_elements_by_xpath("//div[@class='purchase']/span/span")
I am not sure whether the supplied XPath and class names are correct. Could someone verify them and let me know how to find the right ones?
In Firefox you can simply use the developer tools or Firebug to check the HTML for classes and element IDs. Following the link in your question, I can find a class called photo, but for linkPrice and price you should use other classes.
Try:
price=driver.find_elements_by_class_name("price")
linkPrice=driver.find_elements_by_class_name("addtocart")
Which gives me:
>>> price[0].text
u'$496'
>>> linkPrice[0].text
u'ADD TO CART'
You can verify an XPath using the developer tools console in Chrome, e.g. $x("//foo") for XPath or $(".foo") for CSS selectors.
Firebug for Firefox will also let you verify them.
Browsers will also suggest an XPath for you, but these are often verbose and unstable, so I would recommend hand-crafting them.
I am trying to click a link by:
driver.find_element_by_css_selector("a[href='javascript:openhistory('AXS0077')']").click()
This works nicely if the link opens in a new window, but in this case the link actually opens a pop-up window. When I try clicking the link with this method, Selenium gives me an error:
Message: u"The given selector a[href='javascript:openhistory('AXS0077')'] is either invalid or does not result in a WebElement. The following error occurred:\nInvalidSelectorError: An invalid or illegal selector was specified"
Is this not the right way? I think there may be some different way to deal with pop-up windows.
Your css selector could be more generic, perhaps:
driver.find_element_by_css_selector("a[href^='javascript']").click()
You've got all kinds of crazy overlapping quotation marks there. You're probably confusing it.
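For example, if you do want to match the full href, one sketch is to keep the quoting from clashing, or to sidestep it entirely with contains() (the AXS0077 value is taken from the question):

# Escape the inner single quotes so the CSS attribute value can use double quotes
driver.find_element_by_css_selector(
    'a[href="javascript:openhistory(\'AXS0077\')"]'
).click()

# Or avoid the quoting problem altogether with an XPath contains()
driver.find_element_by_xpath("//a[contains(@href, 'AXS0077')]").click()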
I have more success using find_element_by_xpath.
Take this site as an example: popups
I use Firebug to inspect the element and get the XPath.
Then using the following works perfectly.
from selenium import webdriver
baseurl="http://www.globalrph.com/davescripts/popup.htm"
dr = webdriver.Firefox()
dr.get(baseurl)
dr.find_element_by_xpath("/html/body/div/center/table/tbody/tr[7]/td/div/table/tbody/tr/td[2]/div[1]/form/table/tbody/tr[4]/td[1]/a").click()
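Since the original question was about a link that opens a pop-up window, it may also help to switch the driver to the new window after the click. A minimal sketch using Selenium's window handles (the contains() locator reuses the AXS0077 value from the question and assumes driver is already on that page):

main_window = driver.current_window_handle
driver.find_element_by_xpath("//a[contains(@href, 'AXS0077')]").click()

# Switch to whichever handle is not the original window
for handle in driver.window_handles:
    if handle != main_window:
        driver.switch_to.window(handle)
        break

# ... interact with the pop-up, then switch back
driver.switch_to.window(main_window)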
I am scraping individual listing pages from justproperty.com (individual listing from the original question no longer active).
I want to get the value of the Ref
this is my xpath:
>>> sel.xpath('normalize-space(.//div[@class="info_div"]/table/tbody/tr/td[normalize-space(text())="Ref:"]/following-sibling::td[1]/text())').extract()[0]
This has no results in scrapy, despite working in my browser.
The following works perfectly in lxml.html (which modern Scrapy uses):
sel.xpath('.//div[@class="info_div"]//td[text()="Ref:"]/following-sibling::td[1]/text()')
Note that I'm using // to get between the div and the td, not laying out the explicit path. I'd have to take a closer look at the document to grok why, but the path given in that area was incorrect.
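For reference, a minimal sketch of running that corrected expression through a Scrapy Selector (the page HTML is assumed to already be in page_html, since the original listing is no longer live):

from scrapy.selector import Selector

sel = Selector(text=page_html)  # page_html: the listing page source, fetched elsewhere
ref = sel.xpath(
    './/div[@class="info_div"]//td[text()="Ref:"]'
    '/following-sibling::td[1]/text()'
).extract_first()
print(ref)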
Don't create XPath expressions by looking at Firebug or Chrome Dev Tools; they change the markup. Remove the /tbody axis step and you'll get exactly what you're looking for.
normalize-space(.//div[@class="info_div"]/table/tr/td[
    normalize-space(text())="Ref:"
]/following-sibling::td[1]/text())
Read Why does my XPath query (scraping HTML tables) only work in Firebug, but not the application I'm developing? for more details.
Another XPath that gets the same thing: (.//td[#class='titles']/../td[2])[1]
I tried your XPath using XPath Checker and it works fine.