Python Selenium, obtaining XPath given href

Once I obtain an href through the use of Selenium in Python, is there a way to find the XPath based on that href and click on it?
For Example:
href = '/sweatshirts/vct65b9ze/yn2gxohw4'
How would I find the XPath on that page?

When the element is for instance a link, you can use the following code:
driver.find_element_by_xpath('//a[@href="/sweatshirts/vct65b9ze/yn2gxohw4"]')
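As a minimal sketch, assuming the href value is held in a Python variable named href (a hypothetical name), the element can be located and clicked in one go:
# Build the xpath from the previously obtained href and click the matching link
href = '/sweatshirts/vct65b9ze/yn2gxohw4'
link = driver.find_element_by_xpath(f'//a[@href="{href}"]')
link.click()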

Related

Finding specific value in class - Selenium (Python)

I'm trying to scrape a website using Selenium.
I tried using XPath, but the problem is that the rows on the website change over time...
How can I scrape the website so that it gives me the output '21,73'?
<div class="weather_yesterday_mean">21,73</div>
You can just use find_element_by_css_selector, which accepts CSS selectors. I personally like them way more than XPath:
elem = driver.find_element_by_css_selector('div.weather_yesterday_mean')
result = elem.text
If that suits you, please read a bit about CSS selectors, for example here: https://www.w3schools.com/cssref/css_selectors.asp

Unable to locate element whose xpath contains lots of 'div'

I am trying to locate an element that is a simple message (not a hyperlink) inside a chatbot. I tried to get xpath using 'inspect' and tried this -
driver.find_element_by_xpath("//[#id='app']/div[1]/div/div/div/div/div[2]/div[2]/div/div[2]/div[3]/div[3]/div[1]/div/div/speak").text
This works, but it is not a reliable solution. I tried to shorten the xpath using starts-with or contains, but it didn't work.
Is there any other locator besides xpath that I can use when there are lots of 'div' in the xpath? And what does [@id='app'] mean in the xpath above?
You should try to write a custom xpath instead of generating it with the browser.
It will look like "//*[@id='app']//speak".
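As a rough sketch, assuming the speak element is the only one under the container with id 'app', the shortened xpath (or an equivalent CSS selector) could be used like this:
# Anchor on the stable id, then jump straight to the tag instead of the long div chain
message = driver.find_element_by_xpath("//*[@id='app']//speak").text
# Equivalent CSS selector
message = driver.find_element_by_css_selector("#app speak").text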

Python - trying to get URL (href) from web scraping using Scrapy

I'm trying to get the URL, or href, from a webpage using web scraping, specifically using Scrapy. However, it returns an empty list when I run response.xpath('XPATH').extract() to get the href link. The HTML page structure is:
The specific element whose href I'm trying to get is the link with the text MAGOMEDOVA MADINA.
The result of the xpath command is an empty list.
For context, I'm trying to get the information in each person's URL and extract it, but I'm unable to retrieve the href from the web page.
I copied the full xpath of the HTML element, and it's /html/body/div[1]/div[1]/div[6]/div/div[2]/div/div[2]/div[2]/div/div[2]/div/div/div[2]/div[1]/a, but this still returns [] when I run the response.xpath command.
In this situation I personally wouldn't use xpath, and I wouldn't even use Scrapy; I believe the simplest solution would be to use BeautifulSoup and requests together.
from bs4 import BeautifulSoup
import requests
url = 'YOUR_URL_HERE'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')
links = soup.find_all('a')
urls = [x['href'] for x in links]
This code will give you the href of every link on the page in a list, and you can filter the list further by the class or whatever you need.
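For example, a sketch of filtering by class (the class name here is only an assumption borrowed from the answer below):
# Keep only anchors with the assumed class, and skip any anchor without an href
links = soup.find_all('a', class_='redNoticeItem__labelLink')
urls = [x['href'] for x in links if x.has_attr('href')]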
You can simply use response.xpath("//a[@class='redNoticeItem__labelLink']").extract()
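If only the href values are needed rather than the whole a elements, the same selector can be extended with /@href (a sketch, assuming that class name is correct):
urls = response.xpath("//a[@class='redNoticeItem__labelLink']/@href").extract()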

Using Selenium and conditional xpath to click element

Currently I am learning to use Selenium to automate testing. One of my tasks is to click through to the next page on a website. The xpath I copied from the specific button is the following:
xpath = '//*[@id="pagination"]/div/div[1]/a[4]'
So when I use driver.find_element_by_xpath(xpath).click() it will bring me to the next page.
To click through multiple pages I would like to find the specified element based on the condition that its text is equal to the correct page number.
For this I tried the following conditional xpath:
xpath = //*[#id="pagination"]/div/div[1]/a[text()='page_num']
where page_num is the specific page I want to click on.
Example:
for the following element I would use the xpath:
element = 2
xpath = //*[#id="pagination"]/div/div[1]/a[text()='2']
I would expect Selenium to click through to the specified page, but instead I get an error message that the xpath doesn't exist.
What should be the correct conditional xpath name?
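As a sketch of what the question seems to be after, the page number could be interpolated into the xpath with an f-string, so the literal string 'page_num' is replaced by the actual value (assuming page_num holds the page number):
page_num = 2
xpath = f'//*[@id="pagination"]/div/div[1]/a[text()="{page_num}"]'
driver.find_element_by_xpath(xpath).click()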

XPath working in scrapy but not in selenium

I have an xpath which works in python-scrapy and also in the Firebug extension of Firefox, but it is not working in python-selenium. The code I am using in Selenium is this:
xpath = ".//div[#id='containeriso3']/div/a[1]/#href"
browser.find_element_by_xpath(xpath)
This gives an InvalidSelectorException error. Does selenium use some other xpath version?
That isn't going to get you an element. You need to take the /@href attribute off.
Use .//div[@id='containeriso3']/div/a[1]
Then use get_attribute to get the href from it.
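Putting that together, a minimal sketch:
# Locate the element itself (no /@href in the xpath) ...
elem = browser.find_element_by_xpath(".//div[@id='containeriso3']/div/a[1]")
# ... then read the attribute off the element
href = elem.get_attribute("href")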
