Selenium Webscraper with Python won't let me click an element

I am trying to put together a web scraper that gets store locations for a zip code entered by the user. Right now I am able to navigate to the website, but I am not able to click the drop-down button that lets you enter a zip code. Here is what I have so far:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
import time
import pandas as pd
from selenium.webdriver.common.by import By
zipcode = input("What zip code would you like to search? ")
out_table = 'Oreilly_auto_parts_addresses_{}.csv'.format(zipcode)
#Using Selenium to navigate to website, search zipcode and get html data
driver = webdriver.Chrome()  # requires chromedriver (geckodriver is for Firefox)
driver.get('https://www.oreillyauto.com/')
time.sleep(2)
driver.maximize_window()
el = driver.find_element_by_class_name("site-store")
time.sleep(2)
driver.execute_script("arguments[0].setAttribute('class','site-store site-nav_item dispatcher-trigger--active')", el)
It seems to be acting on the correct element, but the drop-down that is supposed to show up never appears.
Any help is much appreciated!
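For what it's worth, a minimal sketch of the usual first fix: rather than rewriting the class attribute with JavaScript, wait for the trigger to be clickable and let Selenium perform a real click, so the site's own event handlers fire. The site-store class name comes from the question; everything else here is an assumption about the page.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://www.oreillyauto.com/')
driver.maximize_window()

# Wait until the store/zip-code trigger is genuinely clickable, then click it
# so the page's own JavaScript opens the drop-down.
trigger = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CLASS_NAME, "site-store")))
trigger.click()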

Related

How do I click a button on cookies pop up using Selenium?

Hi, I want to click 'Save Services' using Selenium on this website to make the pop-up disappear: https://www.hugoboss.com/uk/home. However, I receive a timeout exception.
import numpy as np
import pandas as pd
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driverfile = r'C:\Users\Main\Documents\Work\Projects\extra\chromedriver'
driver = webdriver.Chrome(executable_path=driverfile)
driver.get("https://www.hugoboss.com/uk/men/")
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH,"//button[contains(text(),'SAVE SERVICES')]"))).click()
Further information: when I try to find the XPath of the button by its attribute, //button[@data-testid='uc-save-button'], in the inspect-element finder, it returns 0 results, as if the button does not exist.
I ran len(driver.window_handles) 10 seconds after the page loaded, and it returned 1, meaning Selenium could see only one open window.
Your element is in a shadow root. Find your element in devtools and scroll up; you'll see a #shadow-root node above it in the DOM.
To get into the shadow root, an easy way is to get its parent (host) element and then use JS to get the shadow-root object.
Within that returned object you can find your button:
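A rough sketch of that JS route, using the selectors already in this thread (in Selenium 4 with a recent Chrome, the returned object supports find_element; older versions may return something else):
from selenium.webdriver.common.by import By

# Get the shadow host (the parent item), then ask the browser for its shadow root.
host = driver.find_element(By.XPATH, "//div[@id='usercentrics-root']")
shadow_root = driver.execute_script("return arguments[0].shadowRoot", host)

# Within that returned object, find the button and click it.
shadow_root.find_element(By.CSS_SELECTOR, "button[data-testid='uc-save-button']").click()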
Edit: updated the code from the original answer. This runs for me:
driver = webdriver.Chrome()
driver.implicitly_wait(10)
url = "https://www.hugoboss.com/uk/home"
driver.get(url)
driver.implicitly_wait(10)
shadowRoot = driver.find_element(By.XPATH, "//div[@id='usercentrics-root']").shadow_root
shadowRoot.find_element(By.CSS_SELECTOR, "button[data-testid='uc-save-button']").click()
pip list tells me I'm using:
selenium 4.1.3

I cannot locate a download link by using Selenium in the drop down menu

I am trying to learn Selenium, and I am trying to download an Excel file from a drop-down menu.
First, I click a button to open the menu; afterwards, when I inspect the page, I can get to this part.
I am trying to click this element to download the file:
<span _ngcontent-bke-c150="" class="left-text">Excel</span>
Here is the link to the website: https://survey123.arcgis.com/
I don't think sharing my code would help much, because I am stuck at that very specific part. I was able to log in to the website through Selenium, entering my ID and password, but failed at downloading the Excel file.
But here it is:
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
PATH = r"C:\Program Files (x86)\chromedriver.exe"  # defined but never passed to webdriver.Chrome() below
driver = webdriver.Chrome()
driver.get("https://survey123.arcgis.com/")
inputElement_user = driver.find_element_by_id("user_username")
inputElement_user.send_keys("myusername")
inputElement_password = driver.find_element_by_id("user_password")
inputElement_password.send_keys("mypassword")
giris_button = driver.find_element_by_id("signIn")
actions = ActionChains(driver)
actions.click(giris_button)
actions.perform()
continue_link = driver.find_element_by_partial_link_text('Rapor')
This might not be exactly the answer, but I was also stuck on this point some time ago. I tried this: <a href="file/path/or/link/here" download>, and it worked.
The download attribute makes a file downloadable if the href is set properly. Hope this works for your problem as well.
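In Selenium terms, that suggestion could look like the sketch below. The XPath for the anchor is an assumption for illustration, not taken from the real site, and this only helps if the menu entry is a genuine <a> with a usable href.
from selenium.webdriver.common.by import By

# Hypothetical locator: the <a> wrapping the "Excel" span from the question.
link = driver.find_element(By.XPATH, "//span[text()='Excel']/ancestor::a")
# Add the download attribute via JS, then click the anchor.
driver.execute_script("arguments[0].setAttribute('download', '')", link)
link.click()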

How to find window/iframe from Chrome DevTools

I'm trying to web scrape using Selenium, Python and Beautiful Soup. I am scraping this page, but I want to scrape information off the pop-up window that appears when you click on the 'i' (information) icons in the corner of each product. My code is as follows:
import requests
from bs4 import BeautifulSoup
import time
import selenium
import math
from selenium.webdriver.support.ui import WebDriverWait
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
import chromedriver_binary
import re
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome(ChromeDriverManager().install())
r = requests.get('https://dmarket.com/csgo-skins/product-card/ak-47-redline/field-tested')
driver.get('https://dmarket.com/csgo-skins/product-card/ak-47-redline/field-tested')
html_getter = BeautifulSoup(r.text, "html.parser")
data = html_getter.findAll(attrs={"class":"c-asset__priceNumber"})
dataskin = html_getter.findAll(attrs={"class" : "c-asset__exterior"})
time.sleep(2)
driver.find_element_by_id("onesignal-slidedown-cancel-button").click()
time.sleep(2)
driver.find_element_by_class_name("c-dialogHeader__close").click()
time.sleep(30)
driver.find_element_by_class_name("c-asset__action--info").click()
time.sleep(30)
price_element = driver.switch_to.active_element
print("<<<<<TEXT>>>>>")
print(price_element.text)
print("<<<<<END>>>>>")
driver.close()
However, when I run this, the only text that prints is "close". If you inspect the information pop-up, it should print out the price, data from the chart, etc. How can I get it to print this info? Specifically, I want the amount sold on the most recent day and the price listed on the chart for the most recent day (both seem to be accessible in Chrome DevTools). I don't think I'm looking at the wrong frame, as I switch to the active element, so I'm not sure how to fix this!
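One hedged guess: switch_to.active_element only returns whatever currently has keyboard focus (here, apparently the close button), so a more reliable route is to wait explicitly for the dialog container and read its text. The selector below is a placeholder assumption for illustration, not taken from the real page.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait for the pop-up container itself rather than the focused element.
dialog = WebDriverWait(driver, 30).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, "div.c-dialog"))  # placeholder selector
)
print(dialog.text)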

BeautifulSoup scraping from a web page already opened by Selenium

I would like to scrape a web page that Selenium opened after navigating from a different page.
I entered a search term into a website using Selenium, and this landed me on a new page. My aim is to create soup out of this new page, but the soup is getting created out of the previous page, where I entered my search term. Help, please!
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get('http://www.ratestar.in/')
inputElement = driver.find_element_by_css_selector("#txtStock")
inputElement.send_keys('GM Breweries')
inputElement.send_keys(Keys.ENTER)
driver.wait.until(staleness_of('txtStock')
source = driver.page_source
soup = BeautifulSoup(source)
You need to know the exact company name for your search. After using send_keys, you tried to check for staleness of an element; I did not understand how that statement was supposed to work, so I added a WebDriverWait for an element of the new page.
The following works for me regarding the Selenium part, up to getting the page source:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Firefox()
driver.get('http://www.ratestar.in/')
inputElement = driver.find_element_by_css_selector("#txtStock")
inputElement.send_keys('GM Breweries Ltd.')
inputElement.send_keys(Keys.ENTER)
company = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, 'lblCompany')))
source = driver.page_source
You should add exception handling.
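A minimal sketch of that exception handling, wrapping the explicit wait in a TimeoutException handler:
from selenium.common.exceptions import TimeoutException

try:
    company = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, 'lblCompany')))
except TimeoutException:
    # The results page never appeared; clean up instead of crashing mid-run.
    driver.quit()
    raise

source = driver.page_source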
@Jens Dibbern has given a working solution, but it is not necessary to give the exact company name in the search. When you type a non-exact name, a drop-down pops up.
I have observed that until this drop-down is present, the enter key does not work. You can check this by going to the site, pasting the name, and pressing the enter key as fast as possible without waiting: nothing happens.
You could instead wait for this drop-down to be visible and then send the enter key. This also works perfectly. Note that this will end up selecting the first item in the drop-down if more than one is present.
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get('http://www.ratestar.in/')
inputElement = driver.find_element_by_css_selector("#txtStock")
inputElement.send_keys('GM Breweries')
drop_down=driver.find_element_by_css_selector("#listPlacementStock")
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, '#listPlacementStock:not([style*="display: none"])')))
inputElement.send_keys(Keys.ENTER)
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id="CompanyLink"]')))
source = driver.page_source
soup = BeautifulSoup(source,'html.parser')
print(soup)

Unable to submit keys using selenium with python

Following is the code which I'm trying to run:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import os
import time
# Create a new Firefox session
browser=webdriver.Firefox()
browser.maximize_window()
# Navigate to the app's homepage
browser.get('http://demo.magentocommerce.com/')
# Get the search box, click it, and enter the search term
browser.find_element_by_css_selector("a[href='/search']").click()
search=browser.find_element_by_class_name('search-input')
search.click()
time.sleep(5)
search.click()
search.send_keys('phones'+Keys.RETURN)
However, I'm unable to submit "phones" using send_keys.
Am I going wrong somewhere?
Secondly, is it possible to always use XPath to locate an element and not rely on id/class/CSS selectors, etc.?
The input element you are interested in has the search_query class name. To make it work without using hardcoded time.sleep() delays, use an Explicit Wait and wait for the search input element to be visible before sending keys to it. Working code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
browser = webdriver.Firefox()
browser.maximize_window()
wait = WebDriverWait(browser, 10)
browser.get('http://demo.magentocommerce.com/')
browser.find_element_by_css_selector("a[href='/search']").click()
search = wait.until(EC.visibility_of_element_located((By.CLASS_NAME, "search-query")))
search.send_keys("phones" + Keys.RETURN)
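As for the second question: the same wait can take an XPath locator, as in the sketch below, though id/class/CSS selectors are usually less brittle than XPath expressions tied to page layout.
# Same explicit wait, but locating the search input by XPath instead of class name.
search = wait.until(EC.visibility_of_element_located(
    (By.XPATH, "//input[contains(@class, 'search-query')]")))
search.send_keys("phones" + Keys.RETURN)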
