Selenium Python Class Names are the same - python

I am trying to code a program that displays text from a website. The classes that the text is in all have the same name. I have tried using XPath, but I can't get that to work. I'm not really sure how to explain my question, sorry about that.
from selenium.webdriver.common.keys import Keys
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# (deleted code that doesn't affect this)
# (I have a problem with the code below)
print('Assignment 1')
time.sleep(6)
assignment1 = driver.find_element_by_class_name("calendar-title-text")
print(assignment1.text)
assignment2 = driver.find_element_by_class_name("calendar-title-text"[1])
print(assignment2.text)
time.sleep(10)

So, I think what you are asking is: "How can I select 1 element using xpath if I have multiple elements?"
To answer, here is a walkthrough for demonstration purposes.
Say, for example, we go to Google.com and we search "Hello World Python"
In this search, we are given a lot of results
When you open your Google Chrome Developer Tools ( F12 ) and you navigate to the Elements tab and press CTRL + F, a search bar will display at the bottom of the page.
When we search through the HTML code, we have an xpath that gives us multiple results when we search for a class name.
//div[@id='search']//div[@id='rso']//div[@class='g']
This XPath matches 1 of 12 results, but we want 1 of 1. To accomplish this, we wrap the XPath in parentheses () and add an index, which isolates just one of our elements.
(//div[@id='search']//div[@id='rso']//div[@class='g'])[1]
This XPath results in 1 of 1, which should allow you to interact with the web element.
driver.find_element(By.XPATH, "(//div[@id='search']//div[@id='rso']//div[@class='g'])[1]").click()
If you want to interact with the text of an element, for example, you can print the highlighted text with the following commands:
xpath = "((//div[@id='search']//div[@id='rso']//div[@class='g'])[1]//div[@data-hveid]//div)[7]//div"
element_text = driver.find_element(By.XPATH, xpath).text
print(f'My Text Is: {element_text}')
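Applied to the original question, a minimal sketch (assuming the page really contains several elements with the class calendar-title-text) would collect all matches with find_elements and index the resulting Python list, instead of indexing the string passed to find_element_by_class_name:
from selenium.webdriver.common.by import By
# Collect every element with the class, then index the Python list (0-based).
# Assumes at least two matching elements are present on the page.
assignments = driver.find_elements(By.CLASS_NAME, "calendar-title-text")
print(assignments[0].text)
print(assignments[1].text)
# Equivalent indexed XPath, as shown above (XPath indexes start at 1):
assignment2 = driver.find_element(By.XPATH, "(//*[contains(@class,'calendar-title-text')])[2]")
print(assignment2.text)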

Related

Is there a way to scrape the page url (or a part of it) located in the address bar using selenium in python?

I'm working on a huge dataset of movies and I'm trying to get the IMDb ID of each movie from the IMDb website. I'm using Selenium in Python. I checked, but inside the movie page you can't find the IMDb code. It is contained in the link of the page, which is in the address bar, and I don't know how to scrape it. Are there any methods of doing this?
This is an example of a page URL: https://www.imdb.com/title/tt1877830/?ref_=fn_al_tt_1
I need to get the tt1877830 part of the URL.
Does anyone know how to do it?
If you want to fetch the title ID from the movie URL, first fetch driver.current_url and then use Python's split() function to take the second-to-last segment.
currenturl = driver.current_url.split("/")[-2]
print(currenturl)
This will return tt1877830
Try driver.current_url
Reference: https://selenium-python.readthedocs.io/api.html
Also, worth noting that IMDB has an official API. You could look at that as well https://aws.amazon.com/marketplace/pp/prodview-bj74roaptgdpi?sr=0-1&ref_=beagle&applicationId=AWSMPContessa
To extract the page URL (or a part of it, i.e. the title ID, e.g. tt1877830), you can take current_url, split it on the / character, and use either of the following solutions:
Using Positive Index:
driver.get('https://www.imdb.com/title/tt1877830/?ref_=fn_al_tt_1')
WebDriverWait(driver, 20).until(EC.url_contains("title"))
print(driver.current_url.split("/")[4])
Console Output:
tt1877830
Using Negative Index:
driver.get('https://www.imdb.com/title/tt1877830/?ref_=fn_al_tt_1')
WebDriverWait(driver, 20).until(EC.url_contains("title"))
print(driver.current_url.split("/")[-2])
Console Output:
tt1877830
Note : You have to add the following imports :
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
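As a variation (a sketch, not part of the answers above), you can avoid relying on a fixed slash position by pulling the ID out of the URL with a regular expression; this assumes IMDb title IDs keep the tt-plus-digits pattern shown above:
import re
from selenium import webdriver
driver = webdriver.Firefox()
driver.get('https://www.imdb.com/title/tt1877830/?ref_=fn_al_tt_1')
# Match the "tt" prefix followed by digits anywhere in the current URL.
match = re.search(r"tt\d+", driver.current_url)
if match:
    print(match.group(0))  # tt1877830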

Xpath locator unable to detect the element

I am trying to select a button using Selenium; however, I believe I am doing something wrong while writing my XPath. Can anyone please help me with this? I need to select the currency Euro.
link :- https://www.booking.com/
The locator I want to select:
The locator I have written:
USD = self.find_element_by_xpath(f"//a[contains(text(),'selected_currency='Euro']")
USD.click()
The below XPath
//a[contains(@href,'EUR') and starts-with(@class,'bui-list')]
is present two times in the HTML DOM.
Steps to check:
Press F12 in Chrome -> go to the Elements section -> press CTRL + F -> paste the XPath and check whether your desired element is highlighted as a 1/1 matching node.
In case you would like to select the first one which is in Suggest for you, there is no need to do anything with respect to XPath.
@pmadhu's answer is misleading: why would anyone look for the first index in an XPath when using Selenium? If there are multiple matching nodes, Selenium always picks the first element, so it does not make sense to me why someone would use [1].
Nevertheless,
to click on Suggest for you EURO :
try:
    WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//a[contains(@href,'EUR') and starts-with(@class,'bui-list')]"))).click()
except:
    pass
Imports :
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
In case you are looking to click on the second matching node for EUR, it makes sense to use index 2. The XPath would look something like this:
(//a[contains(@href,'EUR') and starts-with(@class,'bui-list')])[2]
The text you are looking for, selected_currency=EUR, is in the data-modal-header-async-url-param attribute of the a tag. You should run the contains() on that attribute.
Edit:
Locator: //a[contains(@data-modal-header-async-url-param, 'selected_currency=EUR')]
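A minimal sketch of clicking the EUR entry with that locator and an explicit wait (assuming the attribute-based XPath above matches the intended element):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 20)
# Wait for the EUR entry, identified by its data-modal-header-async-url-param attribute, then click it.
eur_option = wait.until(EC.element_to_be_clickable(
    (By.XPATH, "//a[contains(@data-modal-header-async-url-param, 'selected_currency=EUR')]")))
eur_option.click()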
As already explained, selected_currency=EUR is in the attribute - data-modal-header-async-url-param.
However, you can select the required option with the code below.
The XPath for the EUR option can be //div[contains(text(),'EUR')]. Since that highlights 2 elements in the DOM, use (//div[contains(text(),'EUR')])[1]. It's important to find unique locators.
# Imports required for Explicit wait
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver.get("https://www.booking.com/")
wait = WebDriverWait(driver,30)
# Click on Choose currency option
wait.until(EC.element_to_be_clickable((By.XPATH,"//span[@class='bui-button__text']/span[contains(text(),'INR')]"))).click()
# Click on the EUR option.
euro_option = wait.until(EC.element_to_be_clickable((By.XPATH,"(//div[contains(text(),'EUR')])[1]")))
euro_option.click()

How do I access specific or all text elements using xpath locators?

Currently using Python and Selenium to scrape data, export to a CSV and then manipulate as needed. I am having trouble grasping how to build xpath statements to access specific text elements on a dynamically generated page.
https://dutchie.com/embedded-menu/revolutionary-clinics-somerville/menu
From the above page I would like to export the category (not part of each product, but a parent element) followed by all the text fields associated to a product card.
The following statement allows me to pull all the titles (sort of) under the "Flower" category, but from that I am unable to access all child text elements within that product, only a weird variation of title. The xpath approach seems to be ideal as it allows me to pull this data without having to scroll the page with key passes/javascript.
products = driver.find_elements_by_xpath("//div[text()='Flower']/following-sibling::div/div")
for product in products:
    print("Flower", product.text)
What would I add to the above statement if I wanted to pull the full set of elements that contain text for all children within the 'consumer-product-card__InViewContainer', within each category, such as flower, pre-rolls and so on? I experimented with different approaches last night, and with different paths/nodes/predicates to try to access this information building off the above code, but ultimately failed.
Also is there a way for me to test or visualize in some way "where I am" in terms of scope of a given xpath statement?
Thank you in advance!
I have tried some code for you; please take a look and let me know if it resolves your problem.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 60)
driver.get('https://dutchie.com/embedded-menu/revolutionary-clinics-somerville/menu')
All_Heading = wait.until(
    EC.visibility_of_all_elements_located((By.XPATH, "//div[contains(@class,\"products-grid__ProductGroupTitle\")]")))
for heading in All_Heading:
    driver.execute_script("return arguments[0].scrollIntoView(true);", heading)
    print("------------- " + heading.text + " -------------")
    ChildElement = heading.find_elements_by_xpath("./../div/div")
    for child in ChildElement:
        driver.execute_script("return arguments[0].scrollIntoView(true);", child)
        print(child.text)
Running this prints each category heading followed by the text of its product cards.
Hope this is what you are looking for. If it solves your query, please mark it as the answer.
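On the question of how to test or visualize "where you are" for a given XPath, one sketch (not specific to this site) is to print an element's outerHTML, which shows exactly which node the locator resolved to:
# Dump the HTML of whatever node the XPath resolved to, truncated for readability.
element = driver.find_element_by_xpath("//div[contains(@class,'products-grid__ProductGroupTitle')]")
print(element.get_attribute("outerHTML")[:500])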

How to select a value from drop down menu in python selenium

I am writing a Python script which will call a webpage and select an option from the drop-down to download a file. To do this task, I am using ChroPath, a browser extension which can give you the relative XPath or id for any button or field on the webpage, so that we can call it from a Python Selenium script.
The above image shows the drop-down menu in which I have to select 2019 as the year and then download the file. In the lower part of the image, you can see that I have used ChroPath to get the relative XPath of the drop-down menu, which is //select[@id='rain']
Below is the code I am using:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get("<URL>")
driver.maximize_window()
grbf = driver.find_element_by_xpath("//select[@id='rain']")
grbf.send_keys('2019')
grbf_btn = (By.XPATH, "//form[1]//input[1]")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable(grbf_btn)).click()
From the above code, you can see that I am using XPath to select the drop-down, grbf = driver.find_element_by_xpath("//select[@id='rain']"), then sending keys as 2019, i.e. grbf.send_keys('2019'), and after that I am clicking the download button. But for some reason it always selects the year 1999 from the drop-down. I am not able to understand what is wrong with this. Is this the correct approach? Please help. Thanks
I had the same problem time ago. Try this:
from selenium.webdriver.support.ui import Select
grbf = Select(driver.find_element_by_xpath("//select[@id='rain']"))
grbf.select_by_value('2019')
In the select_by_value() you have to use the value of the element in the dropdown.
By the way, if an element has id, use it.
grbf = Select(driver.find_element_by_id('rain'))
Try below code:
select = Select(driver.find_element_by_xpath("//select[@id='rain']"))
select.select_by_visible_text('2019')
Other approaches to deal with a dropdown (a combined sketch follows below):
Using the index of the dropdown option:
select.select_by_index(index)
Using the value of the dropdown option:
select.select_by_value('value of element')
Using the visible text of the dropdown option:
select.select_by_visible_text('element_text')
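A minimal runnable sketch tying these together (assuming the page from the question, with a select element whose id is rain and an option whose value and visible text are 2019):
from selenium import webdriver
from selenium.webdriver.support.ui import Select
driver = webdriver.Firefox()
driver.get("<URL>")  # URL elided in the question
# Wrap the <select> element so Selenium exposes the selection helpers.
dropdown = Select(driver.find_element_by_id("rain"))
dropdown.select_by_visible_text('2019')   # by the text shown in the menu
# dropdown.select_by_value('2019')        # or by the option's value attribute
# dropdown.select_by_index(0)             # or by position (0-based)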
In my opinion, this is not the correct approach. You are trying to select an option from a dropdown (not a text box), so the send keys command does not work.
What you need to do is inspect how the HTML changes when clicking the dropdown and write an XPath for the option that you want to select.
If you are still stuck on this problem, I recommend Katalon Recorder, a Chrome extension that allows you to record and do UI testing.

Selenium: How to parse through a code after using selenium to click a dropdown

I'm trying to web-scrape through this webpage https://www.sigmaaldrich.com/. So far I have managed to use the requests method to drive the search bar. After that, I want to look up the different prices of the compounds. The HTML code that includes the prices is not visible until the Price dropdown has been clicked. I have achieved that by using Selenium to click all the dropdowns with the desired class. But after that, I do not know how to get the HTML code of the page that is generated after clicking the dropdowns, where the price is placed.
Here's my code so far:
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from time import sleep
#get the desired search terms by input
name=input("Reagent: ")
CAS=input("CAS: ")
#search using the name of the compound
data_name= {'term':name, 'interface':'Product%20Name', 'N':'0+',
'mode':'mode%20matchpartialmax', 'lang':'es','region':'ES',
'focus':'product', 'N':'0%20220003048%20219853286%20219853112'}
#search using the CAS of the compound
data_CAS={'term':CAS, 'interface':'CAS%20No.', 'N':'0','mode':'partialmax',
'lang':'es', 'region':'ES', 'focus':'product'}
#get the link of the name search
r=requests.post("https://www.sigmaaldrich.com/catalog/search/", params=data_name.items())
#get the link of the CAS search
n=requests.post("https://www.sigmaaldrich.com/catalog/search/", params=data_CAS.items())
#use selenium to click in the dropdown(only for the name search)
driver=webdriver.Chrome(executable_path=r"C:\webdrivers\chromedriver.exe")
driver.get(r.url)
dropdown=driver.find_elements_by_class_name("expandArrow")
for arrow in dropdown:
    arrow.click()
As I said, after this I need to find a way to get the html code after opening the dropdowns so that I can look for the price class. I have tried different things but I don't seem to get any working solution.
Thanks for your help.
You can try using Selenium's WebDriverWait:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 30)
element = wait.until(EC.presence_of_element_located(locator))  # locator is a (By.<...>, "selector") tuple
First, you should use WebDriverWait as Austen pointed out.
For your question, try this:
from selenium import webdriver
driver=webdriver.Chrome(executable_path=r"C:\webdrivers\chromedriver.exe")
driver.get(r.url)
dropdown=driver.find_elements_by_class_name("expandArrow")
for arrow in dropdown:
    arrow.click()
html_source = driver.page_source
print(html_source)
Hope this helps you!
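Building on that, a sketch of feeding page_source into BeautifulSoup (already imported in the question) to look for the prices; the class name "price" used here is a hypothetical placeholder, since the real class has to be read from the expanded dropdown's HTML:
from bs4 import BeautifulSoup
# Parse the HTML as it exists after the dropdowns have been expanded.
soup = BeautifulSoup(driver.page_source, "html.parser")
# "price" is a placeholder class name; replace it with the class you see
# in the expanded dropdown's markup.
for price in soup.find_all(class_="price"):
    print(price.get_text(strip=True))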
