Unable to close a pop up using Selenium with Python - python

So here is my code so far, which isn't faring too well:
from selenium import webdriver
from selenium.webdriver.common.by import By
import time

url = 'https://americanbarbell.com/products/american-barbell-cast-kettlebell'
path = r"C:\Program Files (x86)\msedgedriver.exe"
driver = webdriver.Edge(path)
driver.get(url)
time.sleep(5)
driver.find_element(By.XPATH, '//*[@id="closeIconSvg"]').click()
But I keep getting this back:
Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="closeIconSvg"]"}
I'm not seeing anything about an iFrame? I saw a lot of other people with this problem but have yet to find a working solution.
Thanks in advance

Switch to the modal window and wait until it loads.
iframe = driver.find_element(By.XPATH, '//*[@id="attentive_creative"]')
driver.switch_to.frame(iframe)
wait = WebDriverWait(driver, 10)
wait.until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, "#closeIconContainer"))
)
driver.find_element(By.ID, "closeIconContainer").click()
You will need to import a couple of methods.
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
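One thing worth adding (an assumption based on how iframes work in Selenium, not part of the answer above): after clicking the close button you should switch back to the top-level document, otherwise later find_element calls keep searching inside the dismissed iframe. A duck-typed sketch of that flow (the selectors are taken from the answer above; "css selector" is the literal value of Selenium's By.CSS_SELECTOR, used here so the helper has no hard Selenium dependency):

```python
# Minimal sketch; the selector strings are assumptions from the answer above.
CSS = "css selector"  # same value as selenium's By.CSS_SELECTOR

def close_popup_in_iframe(driver, wait,
                          iframe_sel="#attentive_creative",   # assumed popup iframe id
                          close_sel="#closeIconContainer"):   # assumed close-button id
    """Enter the popup iframe, click its close button, and return to the page."""
    iframe = driver.find_element(CSS, iframe_sel)
    driver.switch_to.frame(iframe)
    try:
        button = wait.until(lambda d: d.find_element(CSS, close_sel))
        button.click()
    finally:
        # Without this, later lookups still target the dismissed iframe
        # and raise NoSuchElementException.
        driver.switch_to.default_content()
```

With a real driver you would call it as `close_popup_in_iframe(driver, WebDriverWait(driver, 10))`.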

A super simple workaround ended up being:
driver.refresh()
Thanks!

Please show the line where this id appears.
You could probably use a solution like this:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException
import time

url = 'https://americanbarbell.com/products/american-barbell-cast-kettlebell'
path = r"C:\Program Files (x86)\msedgedriver.exe"
driver = webdriver.Edge(path)
driver.get(url)
time.sleep(5)
try:
    element = driver.find_element(By.XPATH, '//*[@id="closeIconSvg"]')
    element.click()
except NoSuchElementException:  # error caught here
    print('Error.')

Related

I keep getting the error: Unable to locate element: {"method":"css selector","selector":".user-profile-link"}

I keep getting the error: Unable to locate element: {"method":"css selector","selector":".user-profile-link"} - Everything works OK except for this error, and I have tried to search for a solution with no success. Please help. Note that in the code I pasted below, I replaced my real username and password with "my_github_username" and "my_github_password" respectively.
from selenium import webdriver
from selenium.webdriver.common.by import By
browser = webdriver.Chrome()
browser.get("https://github.com/")
browser.maximize_window()
signin_link = browser.find_element(By.PARTIAL_LINK_TEXT, "Sign in")
signin_link.click()
username_box = browser.find_element(By.ID, "login_field")
username_box.send_keys("my_github_username")
password_box = browser.find_element(By.ID, "password")
password_box.send_keys("my_github_password")
password_box.submit()
profile_link = browser.find_element(By.CLASS_NAME, "user-profile-link")
link_label = profile_link.get_attribute("innerHTML")
assert "my_github_username" in link_label
browser.quit()
Can you print out the HTML using print(driver.page_source)? That would make it easier for people to investigate and give you more efficient support.
I believe the root cause is that this portion of your code gets executed before the element you seek finishes loading.
Perhaps try adding time.sleep(5), or, if you want to be more advanced, you can use WebDriverWait:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
browser = webdriver.Chrome()
browser.get("https://github.com/")
browser.maximize_window()
signin_link = browser.find_element(By.PARTIAL_LINK_TEXT, "Sign in")
signin_link.click()
# add some wait time here by using either `time.sleep(5)` or WebDriverWait
# time.sleep(5)
wait = WebDriverWait(browser, 10)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, '#login_field')))
username_box = browser.find_element(By.ID, "login_field")
username_box.send_keys("my_github_username")
password_box = browser.find_element(By.ID, "password")
password_box.send_keys("my_github_password")
password_box.submit()
# add some wait time here by using either `time.sleep(5)` or WebDriverWait
# time.sleep(5)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, '.user-profile-link')))
profile_link = browser.find_element(By.CLASS_NAME, "user-profile-link")
link_label = profile_link.get_attribute("innerHTML")
assert "my_github_username" in link_label
browser.quit()

Scrape data from dynamic table using Python & Selenium

I'm trying to scrape data from this URL: https://qmjhldraft.rinknet.com/results.htm?year=2018, but I can't seem to scrape even a single name from the dynamic table.
Here's the code that I currently have :
from selenium import webdriver
PATH = r'C:\Program Files (x86)\chromedriver.exe'
driver = webdriver.Chrome(PATH)
driver.get('https://qmjhldraft.rinknet.com/results.htm?year=2018')
element = driver.find_element_by_xpath('//*[@id="ht-results-table"]/tbody[1]/tr[2]/td[4]').text
print(element)
The code gives me this error:
NoSuchElementException: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="ht-results-table"]/tbody[1]/tr[2]/td[4]"}
There's obviously something wrong with my XPath, but I can't figure out what.
Thanks a lot for the help!
The first problem is that the website loads dynamically, so you need to give the page some time to load fully. To solve it you can use this:
time.sleep(2)  # change the number according to your need
element = driver.find_element_by_xpath('//*[@id="ht-results-table"]/tbody[1]/tr[2]/td[4]').text
The best way is to use explicit waits; these wait for the element to load before executing the next step.
The second problem is that you shouldn't just copy the XPath from the Chrome dev tools.
To get all the names, you can do this:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
PATH = r'C:\Program Files (x86)\chromedriver.exe'
driver = webdriver.Chrome(PATH)
driver.get('https://qmjhldraft.rinknet.com/results.htm?year=2018')
try:
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.XPATH, "//tr[@rnid]/td[3]"))
    )
finally:
    names = driver.find_elements_by_xpath('//tr[@rnid]/td[3]')
    for name in names:
        print(name.text)
    driver.quit()

Selenium does NOT do anything after getting rid of GDPR consent

It's my first time using Selenium and web scraping. I have been stuck with the annoying GDPR iframe. I am simply trying to go to a website, search for something in the search bar, and then click on one of the results. But it does not seem to do anything after I get rid of the GDPR consent.
Importantly, it does not give any errors.
This is my very simple code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
#Web driver
driver = webdriver.Chrome(executable_path=r"C:\Program Files (x86)\chromedriver.exe")
driver.get("https://transfermarkt.co.uk/")
search = driver.find_element_by_name("query")
search.send_keys("Sevilla FC")
search.send_keys(Keys.RETURN)
WebDriverWait(driver,10).until(EC.frame_to_be_available_and_switch_to_it((By.ID, "sp_message_iframe_382445")))
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//button[text()='ACCEPT ALL']"))).click()
try:
    sevfclink = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "368")))
    sevfclink.click()
except:
    driver.quit()
time.sleep(5)
driver.quit()
Not sure where you got the iframe from, but the id might be dynamic, so try this:
driver.get("https://transfermarkt.co.uk/")
wait = WebDriverWait(driver,10)
search = wait.until(EC.element_to_be_clickable((By.NAME, "query")))
search.send_keys("Sevilla FC", Keys.RETURN)
wait.until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR,"iframe[id^='sp_message_iframe']")))
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[text()='ACCEPT ALL']"))).click()
driver.switch_to.default_content()
try:
    sevfclink = wait.until(EC.element_to_be_clickable((By.ID, "368")))
    sevfclink.click()
except:
    pass
It looks like the two lines starting with WebDriverWait throw an error. If I skip straight to the try statement, I get the results of the search: a page that gives an overview of Sevilla FC shows up. I presume the WebDriverWait lines are there to make sure you wait for something, but from what I can tell they are unnecessary.

TimeoutException(message, screen, stacktrace) selenium.common.exceptions.TimeoutException: Message:

I use this line of code to wait until product_id appears. My code works well for about 30 minutes:
element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, product_id)))
And then, I get an error
TimeoutException(message, screen, stacktrace)selenium.common.exceptions.TimeoutException: Message:
I tried changing the WebDriverWait timeout to 100, but that didn't solve it. I want to understand why this error appears, and find a solution for this case. Thanks so much!
This is my workaround, but I want to know the cause of the error and am looking for a better solution:
while True:
    driver.get(URL)
    try:
        element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, product_id)))
        break
    except TimeoutException:
        driver.quit()
Can you provide an HTML example? Without one it's nearly impossible to debug this for you; a code & HTML example would be helpful.
My guess is that your website uses dynamic id locators, or an iframe, or that in your script Selenium is active on another tab/window/iframe. This assumes your product_id is correct.
Update with the example:
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
PRODUCT_ID = 'productTitle'
driver = webdriver.Chrome()
driver.get("https://www.amazon.com/dp/B08BB9RWXD")
try:
    element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, PRODUCT_ID)))
    print(element.text)
except TimeoutException:
    print("Cannot find product title.")
driver.quit()
I don't have any problem using the above code. You can extend it yourself :).
One thing to note is that you should NOT use while & break. WebDriverWait(driver, 10) already loops, looking for your element for up to 10 seconds; if it finds the element within those 10 seconds, it stops on its own. You don't need to write the loop yourself.
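The internal loop described above can be sketched in plain Python. This is an illustrative analogue of what WebDriverWait does, not Selenium's actual implementation: repeatedly evaluate a condition, sleeping between polls, until it returns something truthy or the timeout elapses.

```python
import time

def wait_until(condition, timeout=10.0, poll_frequency=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Illustrative analogue of WebDriverWait(driver, timeout).until(...):
    the loop lives inside the wait, so callers never need their own
    while/break around it.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll_frequency)
```

Selenium's real wait also passes the driver to the condition and swallows a configurable list of exceptions between polls, but the control flow is the same.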

How to select element trough xpath in Selenium and Python?

I want to check the inner text of a web element, but XPath does not find it even though I gave it the absolute path. I get a no such element error on the line where I try to define Plaje.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

driver = webdriver.Edge(r"D:\pariuri\python\MicrosoftWebDriver.exe")
driver.get("https://www.unibet.ro/betting#filter/all/all/all/all/in-play")
try:
    element_present = EC.presence_of_element_located((By.CLASS_NAME, 'KambiBC-event-result__score-list'))
    WebDriverWait(driver, 4).until(element_present)
except TimeoutException:
    print('Timed out waiting for page to load')
event = driver.find_elements_by_class_name('KambiBC-event-item KambiBC-event-item--type-match')
for items in event:
    link = items.find_element_by_class_name('KambiBC-event-item__link')
    scoruri = items.find_element_by_class_name('KambiBC-event-item__score-container')
    scor1 = scoruri.find_element_by_xpath(".//li[@class='KambiBC-event-result__match']/span[1]")
    scor2 = scoruri.find_element_by_xpath(".//li[@class='KambiBC-event-result__match']/span[2]")
    print(scor1.text)
    print(scor2.text)
    if scor1.text == '0' and scor2.text == '0':
        link.click()
        Plaje = driver.find_element_by_xpath(".//*[@id='KambiBC-contentWrapper__bottom']/div/div/div/div/div[2]/div[2]/div[3]/div/div/div[4]/div[1]/div[4]/div[2]/div/div/ul/li[2]/ul/li[6]/div[1]/h3")
        print(Plaje.text)
Always add an implicit wait after initializing the webdriver.
driver = webdriver.Firefox()
driver.implicitly_wait(10) # seconds
Try with the below xpath.
"//h3[contains(text(),'Total goluri')]"
or
"//div[#class='KambiBC-bet-offer-subcategory__container']/h3[1]"
Hope this helps. Thanks.
EDIT: It's always advisable to use the implicit wait. We can handle the same thing with explicit waits, but then we need to add an explicit wait for each and every element, and there is a good chance you might miss adding one to a few elements and have to debug again. The choice is always yours.
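To make that tradeoff concrete, here is a toy, non-Selenium sketch of the difference (the class and method names are illustrative, not Selenium's API): an implicit wait is a single default timeout the driver applies to every lookup, while explicit waits must be written at each call site.

```python
import time

class ToyDriver:
    """Toy illustration (not Selenium): an implicit wait is one default
    timeout applied to every find(); an explicit wait is written per call."""

    def __init__(self, dom):
        self.dom = dom            # dict: selector -> value
        self._implicit = 0.0

    def implicitly_wait(self, seconds):
        self._implicit = seconds  # set once, used by every find()

    def find(self, selector):
        # Every lookup automatically retries until the implicit timeout.
        deadline = time.monotonic() + self._implicit
        while True:
            if selector in self.dom:
                return self.dom[selector]
            if time.monotonic() >= deadline:
                raise LookupError(selector)
            time.sleep(0.01)
```

Usage: `driver = ToyDriver({"h3": "Total goluri"}); driver.implicitly_wait(1); driver.find("h3")` retries for up to a second without any per-call wait code.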
Try using .get_attribute("value") or .get_attribute("innerHTML") instead of .text.
Try the below:
//h3[contains(.,'Total goluri')]
hope it will help you :)
