XPath working in scrapy but not in selenium - python

I have an XPath that works in Python Scrapy and also in the Firebug extension of Firefox, but it is not working in Python Selenium. The code I am using in Selenium is this:
xpath = ".//div[@id='containeriso3']/div/a[1]/@href"
browser.find_element_by_xpath(xpath)
This raises an InvalidSelectorException. Does Selenium use some other XPath version?

That isn't going to get you an element. You need to take the @href attribute off:
.//div[@id='containeriso3']/div/a[1]
Then call get_attribute("href") on the returned element to read the link.
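The element-then-attribute pattern can be sketched offline with the standard library's ElementTree; the markup below is a made-up stand-in for the page, not its real structure:

```python
import xml.etree.ElementTree as ET

# Made-up markup standing in for the real page structure.
doc = ET.fromstring(
    "<body><div id='containeriso3'><div>"
    "<a href='/iso/first.pdf'>first</a>"
    "<a href='/iso/second.pdf'>second</a>"
    "</div></div></body>"
)

# Select the element only; a trailing /@href would ask for an attribute
# node, which Selenium's find_element_by_xpath cannot return.
link = doc.find(".//div[@id='containeriso3']/div/a[1]")

# Read the attribute in a second step; in Selenium this is
# link.get_attribute("href").
print(link.get("href"))  # /iso/first.pdf
```

Scrapy can return the attribute string directly from `/@href`, which is why the same expression works there but not in Selenium.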


Finding specific value in class - Selenium (Python)

I'm trying to scrape a website using Selenium.
I tried using XPath, but the problem is that the rows on the website change over time...
How can I scrape the website so that it gives me the output '21,73'?
<div class="weather_yesterday_mean">21,73</div>
You can just use a CSS selector instead (Selenium's find_element_by_css_selector, the counterpart of the browser's querySelector). I personally like them way more than XPath:
elem = driver.find_element_by_css_selector('div.weather_yesterday_mean')
result = elem.text
If that suits you, please read a bit about CSS selectors, for example here: https://www.w3schools.com/cssref/css_selectors.asp

Unable to locate element whose xpath contains lots 'div'

I am trying to locate an element that is a simple message (not a hyperlink) inside a chatbot. I got the XPath from the browser's 'inspect' tool and tried this:
driver.find_element_by_xpath("//*[@id='app']/div[1]/div/div/div/div/div[2]/div[2]/div/div[2]/div[3]/div[3]/div[1]/div/div/speak").text
This works, but it is not a reliable solution. I tried to shorten the XPath using starts-with or contains, but that didn't work.
Is there any locator other than XPath that I can use when there are lots of 'div's in the path? And what does this mean in the XPath above?
[@id='app']
You should write a custom XPath instead of generating one in the browser.
It will look like "//*[@id='app']//speak". The [@id='app'] predicate matches the element whose id attribute equals 'app', and // then searches any depth below it.
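The anchor-on-a-stable-id idea can be sketched with the standard library's ElementTree; the chat markup below is invented for illustration, only the id='app' anchor mirrors the question:

```python
import xml.etree.ElementTree as ET

# Invented chat markup; only the id='app' anchor reflects the real page.
doc = ET.fromstring(
    "<html><body><div id='app'><div><div>"
    "<speak>Hello, how can I help?</speak>"
    "</div></div></div></body></html>"
)

# Anchor on the stable id, then search any depth below it for <speak>,
# skipping the fragile chain of positional div steps.
app = doc.find(".//*[@id='app']")
speak = app.find(".//speak")
print(speak.text)  # Hello, how can I help?
```

This is the two-step equivalent of the single Selenium call driver.find_element_by_xpath("//*[@id='app']//speak").text.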

Python Webdriver Unable to locate the xpath in facebook for Liked and shared button in pages

I am writing Python code using Selenium that logs into Facebook and likes a Facebook page. The login works, but after opening the requested Facebook page it won't like the page; it shows the error 'AttributeError: 'list' object has no attribute 'click''. Maybe it didn't get the correct XPath? Any ideas?
Use the ChroPath extension in Chrome.
See line 27 of your code: you are using find_elements instead of find_element.
find_elements always returns a list of elements, so when you try to do like.click(), it fails. Use find_element_by_xpath at line 27 of your code instead; it should work.
Good luck!
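The failure mode is easy to reproduce without a browser; the classes below are hypothetical stand-ins that mimic Selenium's return types, not Selenium itself:

```python
# Hypothetical stand-ins mimicking Selenium's return types.
class Button:
    def __init__(self):
        self.clicked = False

    def click(self):
        self.clicked = True

class FakeDriver:
    def find_elements_by_xpath(self, xpath):
        # find_elements ALWAYS returns a list, even for a single match.
        return [Button()]

    def find_element_by_xpath(self, xpath):
        # find_element returns one element (the first match).
        return self.find_elements_by_xpath(xpath)[0]

driver = FakeDriver()

like = driver.find_elements_by_xpath("//button[@name='like']")
try:
    like.click()  # fails: a Python list has no .click()
except AttributeError as e:
    print(e)  # 'list' object has no attribute 'click'

button = driver.find_element_by_xpath("//button[@name='like']")
button.click()  # works: a single element
```

The plural/singular naming is the only difference between the two calls, which is why the bug is easy to miss.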

Xpath is correct but Scrapy doesn't work

I'm trying to download two fields from a webpage. I identified the XPath expressions for each one and then ran the spider, but nothing is downloaded.
The webpage:
http://www.morningstar.es/es/funds/snapshot/snapshot.aspx?id=F0GBR04MZH
The field I want to itemize is ISIN.
The spider runs without errors, but the output is empty.
Here is the line of code:
item['ISIN'] = response.xpath('//*[@id="overviewQuickstatsDiv"]/table/tbody/tr[5]/td[3]/text()').extract()
Try removing tbody from the XPath:
'//*[@id="overviewQuickstatsDiv"]/table//tr[5]/td[3]/text()'
Note that this tag is added by your browser while rendering the page; it is absent from the page source that Scrapy downloads.
P.S. I suggest an (IMHO) even better XPath that doesn't depend on the row position:
'//td[.="ISIN"]/following-sibling::td[contains(@class, "text")]/text()'
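The tbody point can be checked offline with the standard library's ElementTree against a simplified, made-up sketch of the source (the ISIN shown is a placeholder, not the fund's real one):

```python
import xml.etree.ElementTree as ET

# Simplified sketch of the downloaded source: note there is NO <tbody>.
# Browsers insert it while rendering, so inspector-copied paths include it.
doc = ET.fromstring(
    "<div id='overviewQuickstatsDiv'><table>"
    "<tr><td>ISIN</td><td class='line text'>GB00EXAMPLE1</td></tr>"
    "</table></div>"
)

# The inspector-generated path finds nothing in the raw source...
assert doc.find(".//table/tbody/tr") is None

# ...while skipping tbody with // matches what Scrapy actually downloads.
cell = doc.find(".//table//tr/td[2]")
print(cell.text)  # GB00EXAMPLE1
```

The same mismatch explains why an XPath can look "correct" in the browser's inspector yet return an empty result in the spider.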
I think response.selector was not used. Try this:
response.selector.xpath('//*[@id="overviewQuickstatsDiv"]/table/tbody/tr[5]/td[3]/text()').extract()

Selenium open pop up window [Python]

I am trying to click a link by:
driver.find_element_by_css_selector("a[href='javascript:openhistory('AXS0077')']").click()
This works fine if the link opens in a new window, but in this case the link actually opens a pop-up window. When I try clicking the link with this method, Selenium gives me an error:
Message: u"The given selector
a[href='javascript:openhistory('AXS0077')'] is either invalid or does
not result in a WebElement. The following error
occurred:\nInvalidSelectorError: An invalid or illegal selector was
specified"
Is this not the right way? I think there may be some different way to deal with pop-up windows.
Your CSS selector could be more generic, perhaps:
driver.find_element_by_css_selector("a[href^='javascript']").click()
You've got overlapping quotation marks there: the single quotes around AXS0077 terminate the attribute value early, which is why the selector is reported as invalid.
I have had more success using find_element_by_xpath.
Take this site as an example of pop-ups:
I used Firebug to inspect the element and get the XPath.
Then the following works perfectly:
from selenium import webdriver
baseurl="http://www.globalrph.com/davescripts/popup.htm"
dr = webdriver.Firefox()
dr.get(baseurl)
dr.find_element_by_xpath("/html/body/div/center/table/tbody/tr[7]/td/div/table/tbody/tr/td[2]/div[1]/form/table/tbody/tr[4]/td[1]/a").click()
