Click “Accept Cookies” popup with Selenium in Python

I've been trying to scrape some information from this e-commerce website with Selenium. However, when I access the website I need to accept cookies to continue. This only happens when the bot accesses the website, not when I do it manually. When I try to find the corresponding element by XPath, as I found it when inspecting the page manually, I always get this error message:
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
My code is below:
import time
import pandas
from selenium import webdriver  # pip install selenium
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
from bs4 import BeautifulSoup  # pip install beautifulsoup4
PATH = "/Users/Ziye/Desktop/Python/chromedriver"
delay = 15
driver = webdriver.Chrome(PATH)
driver.implicitly_wait(10)
driver.get("https://en.zalando.de/women/?q=michael+michael+kors+taschen")
driver.find_element_by_xpath('//*[@id="uc-btn-accept-banner"]').click()
This is the HTML corresponding to the "That's OK" button; the XPath is as above.
<button aria-label="" id="uc-btn-accept-banner" class="uc-btn uc-btn-primary">
That’s OK <span id="uc-optin-timer-display"></span>
</button>
Does anyone know where my mistake lies?

You should add an explicit wait for this button:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
driver = webdriver.Chrome(executable_path='/snap/bin/chromium.chromedriver')
driver.implicitly_wait(10)
driver.get("https://en.zalando.de/women/?q=michael+michael+kors+taschen")
wait = WebDriverWait(driver, 15)
wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="uc-btn-accept-banner"]')))
driver.find_element_by_xpath('//*[@id="uc-btn-accept-banner"]').click()
Your locator is correct.
As a CSS selector, you can use .uc-btn-footer-container .uc-btn.uc-btn-primary
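As a side note, element_to_be_clickable() returns the element it found, so you can click the returned object directly instead of locating it a second time; re-locating is what can raise StaleElementReferenceException if the banner re-renders between the two lookups. A minimal sketch, assuming chromedriver is on your PATH:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait

driver = webdriver.Chrome()  # assumes chromedriver is on PATH
driver.get("https://en.zalando.de/women/?q=michael+michael+kors+taschen")
wait = WebDriverWait(driver, 15)
# element_to_be_clickable() returns the WebElement, so click it directly
wait.until(EC.element_to_be_clickable((By.ID, "uc-btn-accept-banner"))).click()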

Related

Creating a script to search a webpage

I was trying to access the search bar of this website, https://www.kissanime.ru, using Selenium. I tried it using XPath, class, and CSS selector, but every time this error pops up in the terminal:
selenium.common.exceptions.NoSuchElementException: Message: Unable to
locate element: //*[@id="keyword"]
My approach to the problem was:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys  # needed for Keys.RETURN
from selenium.webdriver.support.select import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
driver = webdriver.Firefox()
driver.get("https://kissanime.ru/")
driver.maximize_window()
search = driver.find_element_by_xpath('//*[@id="keyword"]')
search.send_keys("boruto")
search.send_keys(Keys.RETURN)
Try adding an explicit wait:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
search = WebDriverWait(driver, 10).until(expected_conditions.visibility_of_element_located((By.ID, "keyword")))
Alternatively, add an implicit wait to avoid the race condition:
driver.implicitly_wait(20)
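Putting it together, a minimal sketch of the whole flow; note the Keys import, which the original snippet was missing:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions

driver = webdriver.Firefox()
driver.get("https://kissanime.ru/")
driver.maximize_window()
# Wait until the search box is actually visible before typing into it
search = WebDriverWait(driver, 10).until(
    expected_conditions.visibility_of_element_located((By.ID, "keyword")))
search.send_keys("boruto")
search.send_keys(Keys.RETURN)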

Click on title to scrape data using selenium and scrapy

from scrapy_selenium import SeleniumRequest
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.select import Select
from selenium import webdriver
url = 'https://www.aeafa.es/asociados.php?provinput='
driver = webdriver.Chrome(r'C:\Program Files (x86)\chromedriver.exe')
driver.get(url)
wait = WebDriverWait(driver, 30)
detail=wait.until(EC.element_to_be_clickable((By.XPATH, "//tbody//td[6]")))
detail.click()
The error is in these lines:
detail=wait.until(EC.element_to_be_clickable((By.XPATH, "//tbody//td[6]")))
detail.click()
I want to click on the title link on this page and then scrape the information behind it. How can I scrape this data?
You cannot locate an HTML attribute and click on it. Try replacing
detail = wait.until(EC.element_to_be_clickable((By.XPATH, "//td//img//@src"))).click()
with
detail = wait.until(EC.element_to_be_clickable((By.XPATH, "//a[@title='info']")))
detail.click()
UPDATE
Since your scrapy-selenium approach doesn't seem to work, try the plain Selenium approach first. You can then adapt it to Scrapy.
from selenium import webdriver
driver = webdriver.Chrome()
url = 'https://www.aeafa.es/asociados.php?provinput='
driver.get(url)
for mail in driver.find_elements("xpath", "//p/a[starts-with(@href, 'mailto')]"):
    print(mail.get_attribute('textContent'))
Note that you don't need to open each details popup to get the required email text.
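If you do want the other columns as well, a sketch in the same spirit that walks each result row and prints the cell texts; the row XPath here is an assumption, so adjust it to the actual markup:
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://www.aeafa.es/asociados.php?provinput=')
# Assumed row locator; inspect the page to confirm the table structure
for row in driver.find_elements("xpath", "//table//tbody/tr"):
    cells = [td.get_attribute('textContent').strip()
             for td in row.find_elements("xpath", "./td")]
    print(cells)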

Python Selenium ElementClickInterceptedException

I have already tried several methods to click on a link on a specific website with the help of Selenium. All of them result in the following error message:
ElementClickInterceptedException: Message: element click intercepted: Element LMGP06050001 is not clickable at point (159, 364). Other element would receive the click: ...
(Session info: chrome=89.0.4389.90)
What am I doing wrong?
The goal is to reach the following site and grab several pieces of data from there:
https://www.lipidmaps.org/data/LMSDRecord.php?LMID=LMGP06050001
Below is my code so far:
from time import sleep
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
PATH = "C:\\Users\\xxxxxx\\anaconda3\\chromedriver.exe"
browser = webdriver.Chrome(PATH)
browser.get("https://www.lipidmaps.org/data/structure/LMSDSearch.php?Mode=ProcessClassSearch&LMID=LMGP0605")
link = browser.find_element_by_link_text("LMGP06050001")
browser.implicitly_wait(5)
link.click()
The reason you faced that issue is that a cookie pop-up appeared, and you need to accept the cookie first.
Use WebDriverWait() and wait for element_to_be_clickable()
browser.get("https://www.lipidmaps.org/data/structure/LMSDSearch.php?Mode=ProcessClassSearch&LMID=LMGP0605")
# Accept the cookie banner
WebDriverWait(browser,20).until(EC.element_to_be_clickable((By.CSS_SELECTOR,"button#cookie_notice_accept"))).click()
WebDriverWait(browser,20).until(EC.element_to_be_clickable((By.LINK_TEXT,"LMGP06050001"))).click()
You need to add the imports below:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
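If a click is still intercepted after the cookie banner is accepted (for example, a sticky header scrolling over the link), one common fallback is to click via JavaScript, which is not blocked by overlapping elements. A sketch, continuing from the snippet above:
# Locate the link, then click it through JavaScript instead of a native click
link = WebDriverWait(browser, 20).until(
    EC.presence_of_element_located((By.LINK_TEXT, "LMGP06050001")))
browser.execute_script("arguments[0].click();", link)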

Not Able to See Entire Page Source of Website using Selenium, Python

I am trying to scrape this website:
https://script.google.com/a/macros/cprindia.org/s/AKfycbzgfCVNciFRpcpo8P7joP1wTeymj9haAQnNEkNJJ2fQ4FBXEco/exec
I am using Selenium and Python. I am not able to view the entire page source; basically, I have to scrape the table inside it and click on the next button, but the markup for the next button and the table is not visible in the page source. Here is my code:
import time
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from bs4 import BeautifulSoup
link = "https://script.google.com/a/macros/cprindia.org/s/AKfycbzgfCVNciFRpcpo8P7joP1wTeymj9haAQnNEkNJJ2fQ4FBXEco/exec"
browser = webdriver.PhantomJS()
browser.get(link)
pass1 = browser.find_element_by_xpath("/html/body/div[2]/table[2]/tbody/tr[1]/td/div/div/div[2]/div[2]")
pass1.click()
time.sleep(30)
I am getting this error: NoSuchElementException.
There are two iframes present on the page, so you need to switch into both of them first and then click on the element.
You can also apply an explicit wait on the element so that the script waits until the element is visible on the page.
You can do it like:
browser = webdriver.PhantomJS()
browser.get(link)
browser.switch_to.frame(browser.find_element_by_id('sandboxFrame'))
browser.switch_to.frame(browser.find_element_by_id('userHtmlFrame'))
WebDriverWait(browser, 20).until(EC.presence_of_element_located((By.XPATH, "//div[contains(@class,'charts-custom-button-collapse-left')]//div"))).click()
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
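Once you are inside userHtmlFrame you can read the table directly, and switch_to.default_content() returns you to the top-level document when you are done. A sketch, continuing from the code above; the row XPath is an assumption about the embedded table:
# Assumed locator for the embedded table's rows
rows = WebDriverWait(browser, 20).until(
    EC.presence_of_all_elements_located((By.XPATH, "//table//tr")))
for row in rows:
    print(row.text)
# Switch back before interacting with anything outside the iframe
browser.switch_to.default_content()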

Why is this code showing a NoSuchElementException error? My XPath finds the targeted tag when I check the Chrome DOM

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

class Firefox():
    def test(self):
        base_url = 'https://oakliquorcabinet.com/'
        driver = webdriver.Chrome(executable_path=r'C:\Users\Vicky\Downloads\chromedriver')
        driver.get(base_url)
        search = driver.find_element(By.XPATH, '//div[@class="box-footer"]/button[2]')
        search.click()

ff = Firefox()
ff.test()
By default, Selenium waits for the DOM to load and then tries to find the element. But the confirmation pop-up only becomes visible some time after the main page has loaded.
Use an explicit wait to fix this issue.
Add these imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
and change this line in the script:
search = WebDriverWait(driver, 10).until(expected_conditions.presence_of_element_located((By.XPATH, '//div[@class="box-footer"]/button[2]')))
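Note that presence_of_element_located only guarantees the element exists in the DOM, not that it can be clicked; since the next step here is search.click(), waiting for element_to_be_clickable is the safer condition. A sketch:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions

# Wait until the button is both visible and enabled, then click it
search = WebDriverWait(driver, 10).until(
    expected_conditions.element_to_be_clickable(
        (By.XPATH, '//div[@class="box-footer"]/button[2]')))
search.click()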
