How can I click a confirm window using Selenium - Python

I want to click a confirm window using Selenium, so I tried this code:
if driver.find_element_by_class_name('sa-confirm-button-container') == True:
    no_map.click()
else:
    source = driver.page_source
    bs = bs4.BeautifulSoup(source, 'lxml')
    price_list.append(bs.select('#infoJiga'))
I want to know whether the 'sa-confirm-button-container' class is present in the HTML source (its presence means a confirm window is shown). If the class name is in the source, I want to click the confirm box.
Can you help me?

Please check

if len(driver.find_elements_by_class_name('sa-confirm-button-container')) > 0

instead. The plural find_elements returns a list, so its length tells you whether the element is present without raising an exception.

if driver.find_element_by_class_name('sa-confirm-button-container') == True

this won't work: find_element_by_class_name returns a WebElement (never True) and raises NoSuchElementException when nothing matches. Why not just:

try:
    element = driver.find_element_by_class_name('sa-confirm-button-container')
except NoSuchElementException:
    print('oops, no element')

(NoSuchElementException is imported from selenium.common.exceptions.)
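Putting the two suggestions together, a minimal sketch of what the asker's branch could look like (hedged: the URL is a placeholder, and clicking the first matched container stands in for the no_map element from the question, whose definition is not shown):

import bs4
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://example.com')  # placeholder URL; the real page is not shown in the question

price_list = []
buttons = driver.find_elements_by_class_name('sa-confirm-button-container')
if buttons:  # non-empty list means the confirm window is present
    buttons[0].click()  # click the confirm box
else:
    bs = bs4.BeautifulSoup(driver.page_source, 'lxml')
    price_list.append(bs.select('#infoJiga'))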

I want Selenium (Python) to refresh the page if a certain text is visible in an XPath and to not refresh if another text appears in that XPath

Here is the HTML I'm working with:
<div class="comProductTile__soldOut add8top">Sold Out</div>
I am locating the text via the following XPath:
x = wd.find_element_by_xpath('//*[@id="collectionApp"]/div/div/div[10]/a/div[4]/div[3]/div').text
It returns "Sold Out". When that happens I want to refresh the page. However, when the text in the xpath changes to "In Stock", I want the refreshing to stop. How would I accomplish that? Any help would be appreciated. Thanks.
x = wd.find_element_by_xpath('//*[@id="collectionApp"]/div/div/div[10]/a/div[4]/div[3]/div').text
if x = ("Sold Out")
    wd.refresh
    time.sleep(2)
    continue
I tried many iterations similar to this.
It could be done using a while loop:
while True:
    x = wd.find_element_by_xpath('//*[@id="collectionApp"]/div/div/div[10]/a/div[4]/div[3]/div').text
    if x == "Sold Out":
        wd.refresh()
        time.sleep(2)
    elif x == "In Stock":
        break
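For completeness, a hedged variant of the same loop with the imports it needs and a retry cap so it cannot refresh forever (the XPath and texts come from the question; the placeholder URL and the 50-attempt limit are assumptions):

import time
from selenium import webdriver

wd = webdriver.Chrome()
wd.get('https://example.com/collection')  # placeholder URL; the question does not show the real page

locator = '//*[@id="collectionApp"]/div/div/div[10]/a/div[4]/div[3]/div'
for _ in range(50):  # assumed cap so the loop cannot run forever
    status = wd.find_element_by_xpath(locator).text
    if status == "In Stock":
        break  # stop refreshing once the item is available
    wd.refresh()  # still "Sold Out" (or anything else): reload and check again
    time.sleep(2)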

Selenium cannot locate element

I am using Selenium to create a Kahoot bot flooder (kahoot.it). I am trying to use Selenium to locate the input box as well as the confirm button. Whenever I try to define them as a variable, I get "Command raised an exception: TimeoutException: Message:", which I think means that the 5 seconds I set have expired, meaning the element was never located.
for idr in tabs:
    num += 1
    drv.switch_to.window(idr)
    time.sleep(0.3)
    gameid = WebDriverWait(drv, 5).until(EC.presence_of_element_located((By.CLASS_NAME, "sc-bZSQDF bXdUBZ")))
    gamebutton = WebDriverWait(drv, 5).until(EC.presence_of_element_located((By.CLASS_NAME, "sc-iqHYGH eMQRbB sc-geEHAE kTTBHH")))
    gameid.send_keys(gamepin)
    gamebutton.click()
    time.sleep(0.8)
    try:
        nick = WebDriverWait(drv, 5).until(EC.presence_of_element_located((By.CLASS_NAME, "sc-bZSQDF bXdUBZ")))
        nickbutton = WebDriverWait(drv, 5).until(EC.presence_of_element_located((By.CLASS_NAME, "sc-iqHYGH eMQRbB sc-ja-dpGc gYusMa")))
        nick.send_keys(f'{name}{num - 1}')
        nickbutton.click()
    except:
        pass  # handler body omitted in the original post
I tried locating an "Iframe" which wasn't really successful (might have done it wrong), but I have been searching for hours and haven't found any answers. Any help would be appreciated.
The class names for the input and button tags have spaces in them, i.e. they are compound class names, which By.CLASS_NAME cannot match.
For the input tag you can use the name attribute, and for the button tag you can use the tag name, since it is the only button tag in the DOM.
gameinput = wait.until(EC.presence_of_element_located((By.NAME, "gameId")))
gameinput.send_keys("Sample Text")

submit = wait.until(EC.presence_of_element_located((By.TAG_NAME, "button")))
submit.click()

# It also worked with the line below (the spaces in the class attribute become dots in a CSS selector):
gameinput = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, ".sc-bZSQDF.bXdUBZ")))
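For context, a self-contained sketch of that approach (the kahoot.it URL comes from the question and the gameId name attribute from the answer above; the Chrome driver, the 5-second timeout, and the sample pin are assumptions):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

drv = webdriver.Chrome()
drv.get("https://kahoot.it")
wait = WebDriverWait(drv, 5)

# The game-pin input has a name attribute, so the compound class name can be avoided entirely.
gameinput = wait.until(EC.presence_of_element_located((By.NAME, "gameId")))
gameinput.send_keys("123456")  # hypothetical game pin

# There is only one button tag on the join page, so the tag name is enough.
submit = wait.until(EC.presence_of_element_located((By.TAG_NAME, "button")))
submit.click()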

Blocking login overlay window when scraping web page using Selenium

I am trying to scrape a long list of books spread over 10 web pages. When the loop clicks on the next > button for the first time, the website displays a login overlay, so Selenium cannot find the target elements.
I have tried all the possible solutions:
- Use some Chrome options.
- Use try-except to click the X button on the overlay. But it appears only once (when clicking next > for the first time). The problem is that when I put this try-except block at the end of the while True: loop, it becomes infinite, because I use continue in the except branch since I do not want to break the loop.
- Add some popup-blocker extensions to Chrome, but they do not work when I run the code, although I add the extension using options.add_argument('load-extension=' + ExtensionPath).
This is my code:
# Imports implied by the snippet (not shown in the original post):
from bs4 import BeautifulSoup
import pandas as pd
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException, TimeoutException

options = Options()
options.add_argument('start-maximized')
options.add_argument('disable-infobars')
options.add_argument('disable-avfoundation-overlays')
options.add_argument('disable-internal-flash')
options.add_argument('no-proxy-server')
options.add_argument("disable-notifications")
options.add_argument("disable-popup")
Extension = (r'C:\Users\DELL\AppData\Local\Google\Chrome\User Data\Profile 1\Extensions\ifnkdbpmgkdbfklnbfidaackdenlmhgh\1.1.9_0')
options.add_argument('load-extension=' + Extension)
options.add_argument('--disable-overlay-scrollbar')

driver = webdriver.Chrome(options=options)
driver.get('https://www.goodreads.com/list/show/32339._50_?page=')
wait = WebDriverWait(driver, 2)

review_dict = {'title': [], 'author': [], 'rating': []}
html_soup = BeautifulSoup(driver.page_source, 'html.parser')
prod_containers = html_soup.find_all('table', class_='tableList js-dataTooltip')

while True:
    table = driver.find_element_by_xpath('//*[@id="all_votes"]/table')
    for product in table.find_elements_by_xpath(".//tr"):
        for td in product.find_elements_by_xpath('.//td[3]/a'):
            title = td.text
            review_dict['title'].append(title)
        for td in product.find_elements_by_xpath('.//td[3]/span[2]'):
            author = td.text
            review_dict['author'].append(author)
        for td in product.find_elements_by_xpath('.//td[3]/div[1]'):
            rating = td.text[0:4]
            review_dict['rating'].append(rating)
    try:
        close = wait.until(EC.element_to_be_clickable((By.XPATH, '/html/body/div[3]/div/div/div[1]/button')))
        close.click()
    except NoSuchElementException:
        continue
    try:
        element = wait.until(EC.element_to_be_clickable((By.CLASS_NAME, 'next_page')))
        element.click()
    except TimeoutException:
        break

df = pd.DataFrame.from_dict(review_dict)
df
Any help would be appreciated: can I change the while loop into a for loop that clicks the next > button until the end, where should I put the try-except block that closes the overlay, or is there a Chrome option that can disable the overlay?
Thanks in advance.
Thank you for sharing your code and the website that you are having trouble with. I was able to close the login modal by using XPath. I took this challenge and broke the code up using class objects: one object is for the selenium.webdriver.chrome.webdriver and the other object is for the page that you wanted to scrape ( https://www.goodreads.com/list/show/32339 ). In the following methods, I used the JavaScript return arguments[0].scrollIntoView(); call and was able to scroll to the last book displayed on the page. After I did that, I was able to click the next button.
def scroll_to_element(self, xpath: str):
    element = self.chrome_driver.find_element(By.XPATH, xpath)
    self.chrome_driver.execute_script("return arguments[0].scrollIntoView();", element)

def get_book_count(self):
    return len(self.chrome_driver.find_elements(By.XPATH, "//div[@id='all_votes']//table[contains(@class, 'tableList')]//tbody//tr"))

def click_next_page(self):
    # Scroll to the last record and click "next page"
    xpath = "//div[@id='all_votes']//table[contains(@class, 'tableList')]//tbody//tr[{0}]".format(self.get_book_count())
    self.scroll_to_element(xpath)
    self.chrome_driver.find_element(By.XPATH, "//div[@id='all_votes']//div[@class='pagination']//a[@class='next_page']").click()
Once I clicked on the "Next" button, I saw the modal appear. I was able to find the XPath for the modal and close it.
def is_displayed(self, xpath: str, timeout: int = 5):
    # DriverWait / DriverConditions are presumably aliases for WebDriverWait / expected_conditions.
    try:
        web_element = DriverWait(self.chrome_driver, timeout).until(
            DriverConditions.presence_of_element_located(locator=(By.XPATH, xpath))
        )
        return web_element is not None
    except:
        return False

def is_modal_displayed(self):
    return self.is_displayed("//body[@class='modalOpened']")

def close_modal(self):
    self.chrome_driver.find_element(By.XPATH, "//div[@class='modal__content']//div[@class='modal__close']").click()
    if self.is_modal_displayed():
        raise Exception("Modal Failed To Close")
I hope this helps you to solve your problem.
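For illustration, a hedged sketch of how those methods could be tied together in the scraping loop (the GoodreadsPage wrapper class, its constructor, and the scrape_current_page helper are assumptions; only the scroll, next-page, and modal methods above come from the answer):

page = GoodreadsPage(chrome_driver)  # hypothetical wrapper object holding the methods shown above
page.chrome_driver.get("https://www.goodreads.com/list/show/32339")

for _ in range(10):  # the question mentions 10 pages of books
    scrape_current_page(page)  # hypothetical helper: collect title/author/rating rows here
    page.click_next_page()  # scroll to the last book, then click "next page"
    if page.is_modal_displayed():  # the login overlay appears after the first click
        page.close_modal()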

how to fix "could not be scrolled into view" error in Selenium Python

I am scrolling an element into view via JavaScript, but when I try to click on that element an exception is raised saying the element cannot be scrolled into view, even though in the browser it clearly has been scrolled into view. I've even tried waiting for the item to be clickable, but the same error is still thrown.
I'd appreciate it if anyone could provide a solution in Python, but Java is okay too. Thank you. :)
Here is my code:
for i in range(len(units)):
    matchCnt += '0'
    for name in className:
        if name.lower() in str(units[i].text).lower():
            matchCnt[i] = str(int(matchCnt[i]) + 1)
    if int(matchCnt[i]) == len(className):
        browser.execute_script('return arguments[0].scrollIntoView(true);', units[i])
        WebDriverWait(browser, 200).until(EC.element_to_be_clickable((By.CLASS_NAME, classId)))
        # element[i].click()
        # WebDriverWait(browser, 200).until(webdriver.support.expected_conditions.element_to_be_clickable(units[i]))
        # time.sleep(5)
        units[i].click()
        doesMatch = True
    if doesMatch:
        break
You can use JavaScript to click on the unit; this way the element will be clicked even though it is not scrolled into view.
browser.execute_script("arguments[0].click();", units[i])
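A hedged sketch of that fallback pattern, reusing browser and units[i] from the question (catching ElementNotInteractableException is an assumption; the "could not be scrolled into view" message usually comes from that exception, but other WebDriverException subclasses may apply):

from selenium.common.exceptions import ElementNotInteractableException

try:
    units[i].click()  # try the normal Selenium click first
except ElementNotInteractableException:
    # Fall back to a JavaScript click, which bypasses the scrolled-into-view check.
    browser.execute_script("arguments[0].click();", units[i])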

How to collect data in Python using Selenium and geckodriver

I tried to scrape data from a website using Selenium with Firefox and geckodriver.
I want to navigate from one page to another and extract information. I am unable to move from the current page to another, force the driver back to the original page, and move on to the next element in the list. I created the code below, which clicks on the first element and goes to the specific page to collect data. Thank you.
binary = FirefoxBinary('/usr/bin/firefox')
driver = webdriver.Firefox(firefox_binary=binary, executable_path=r'/home/twitter/geckodriver')

try:
    driver.get('https://www..........')
    list_element = driver.find_elements_by_xpath("//span[@class='icon icon-email']")
    for element in list_element:
        x = driver.current_url
        element.click()
        time.sleep(5)
        for ex in driver.find_elements_by_xpath('//span[@class = "valeur"]'):
            print(ex.text)
        driver.back()
except Exception as e:
    print(e)
    driver.quit()
This might happen because your driver/browser didn't pick up the new page (the current page).
Add one line after element.click() or time.sleep(5):
driver.switch_to.window(driver.current_window_handle)
then try to run your code again.
Hope this helps you! :)
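For illustration, a hedged sketch of where that line could sit inside the question's loop (the locators come from the question and the URL stays a placeholder; re-locating the element by index after driver.back() is an extra assumption, added to avoid stale element references once the browser has navigated away and back):

import time
from selenium import webdriver

driver = webdriver.Firefox()  # assumes geckodriver is on PATH
driver.get('https://example.com')  # placeholder; the question elides the real URL

links = driver.find_elements_by_xpath("//span[@class='icon icon-email']")
for idx in range(len(links)):
    # Re-locate the element each time, since navigating away and back invalidates old references.
    element = driver.find_elements_by_xpath("//span[@class='icon icon-email']")[idx]
    element.click()
    time.sleep(5)
    driver.switch_to.window(driver.current_window_handle)  # the suggested line: re-sync to the current page
    for ex in driver.find_elements_by_xpath('//span[@class = "valeur"]'):
        print(ex.text)
    driver.back()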
