I am using Selenium RC for some testing now, with the Python driver.
But I've run into a problem: every time Selenium RC runs and opens a URL, it opens two windows, one for logging and the other for showing the HTML content, and I can't close both of them from the script.
Here is my script:
#!/usr/bin/env python
#-*-coding:utf-8-*-
from selenium import selenium

def main():
    sel = selenium('localhost', 4444, '*firefox', 'http://www.sina.com.cn/')
    sel.start()
    try:
        sel.open('http://www.sina.com.cn/')
    except Exception, e:
        print e
    else:
        print sel.get_title()
    sel.close()
    sel.stop()

if __name__ == '__main__':
    main()
It is very easy to understand. What I really want is to close all the windows that Selenium opens. I've tried close() and stop(), but neither works.
I had a similar case where my program opened many windows while scraping a webpage. Here is a sample of the code:
#!/usr/bin/python
import time
import webbrowser
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Firefox()
print "Browser fired-up!"
driver.get("https://www.something.com/")
driver.implicitly_wait(5)

while True:
    try:
        playlink = driver.find_element_by_xpath("/html/body/div[2]/div[1]/div/a")
        playlink.click()
        time.sleep(3)
    except NoSuchElementException:
        print "playlink Element not found"
    else:
        backbutton = driver.find_element_by_id("back-to-bing-text")
        backbutton.click()
    try:
        quizlink = driver.find_element_by_xpath("/html/body/div[2]/div[1]/div[1]/ul/li[1]/a/span/span[1]")
        quizlink.click()
    except NoSuchElementException:
        print "quiz1 Element not found"
    else:
        print "quiz1 clicked"

driver.quit()
The driver.close() call bugged me for a week, because I believed it would close all the windows. driver.quit() is what terminates the whole process and closes all the windows.
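To guarantee that quit() runs even when a step in between raises, the driver can be wrapped in a try/finally. A minimal sketch of that pattern: the managed_driver helper name is my own, and it works with any object exposing quit(), so it is demonstrated here with a stand-in class instead of a real browser (with Selenium you would pass webdriver.Firefox as the factory).

```python
from contextlib import contextmanager

@contextmanager
def managed_driver(factory):
    # hypothetical helper: build the driver, hand it to the caller, and
    # guarantee quit() runs even if the body raises, so no windows stay open
    driver = factory()
    try:
        yield driver
    finally:
        driver.quit()

class FakeDriver(object):
    # stand-in for webdriver.Firefox(); records whether quit() was called
    def __init__(self):
        self.quit_called = False
    def quit(self):
        self.quit_called = True

# real usage would be: with managed_driver(webdriver.Firefox) as driver: ...
with managed_driver(FakeDriver) as d:
    kept = d  # pretend to do work here

print(kept.quit_called)  # quit() ran when the with-block exited
```

The same shape works unchanged with a real webdriver, since only quit() is required of the object.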
I've fixed this problem.
It happened because I had installed firefox-bin, not firefox.
Now that I've removed firefox-bin and installed firefox, it works.
stop() will close all windows that Selenium opened.
Thank you for your reminder, AutomatedTester.
I'd suggest making a system call from Python to close the Firefox windows.
Bussiere
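As a last-resort cleanup, the suggestion above can be sketched like this. The firefox_kill_command helper name is my own; the function only builds the command list, and actually running it (shown commented) would force-kill every Firefox process, so use it with care:

```python
import subprocess
import sys

def firefox_kill_command(platform=None):
    # hypothetical helper: build the OS command that force-closes Firefox.
    # 'pkill' exists on Linux/macOS; 'taskkill' is the Windows equivalent.
    platform = platform or sys.platform
    if platform.startswith("win"):
        return ["taskkill", "/im", "firefox.exe", "/f"]
    return ["pkill", "-f", "firefox"]

# to actually run it (kills every Firefox process, not just Selenium's):
# subprocess.call(firefox_kill_command())
```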
The first link was just a Selenium script for accessing YouTube:
from selenium import webdriver
PATH = r"C:\Users\hp\Desktop\prog\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("https://youtube.com")
This Python program works perfectly fine, but when I try the same thing with a Cloudflare-protected website it gets stuck on the wait page.
I did some research and found undetected-chromedriver, but I keep getting errors like RuntimeError, even though the libraries are all installed correctly.
I did more research and found a YouTube video I could follow, but I'm still getting errors.
Here is the second piece of code:
import selenium
import undetected_chromedriver.v2 as uc
import time
options = uc.ChromeOptions()
py = "24.172.82.94:53281"
options.add_argument('--proxy-server=%s' % py)
driver = uc.Chrome(options=options)
driver.get("https://ifconfig.me/")
time.sleep(4)
The error I get: AttributeError: 'ChromeOptions' object has no attribute 'add'
For something related to processes, which I cannot fully understand, you have to put the lines driver = uc.Chrome() and driver.get('https://namecheap.com') inside the if __name__ == '__main__': block.
Also, the link has to include https:// or http:// for it to work.
Here's my working code:
import undetected_chromedriver as uc  # the .v2 is not necessary in the latest version of undetected_chromedriver
import time

def main():
    time.sleep(4)
    # continue your code here

if __name__ == '__main__':
    driver = uc.Chrome()
    driver.get('https://namecheap.com')
    main()
I am trying to parse a table from https://www.morningstar.de/de/screener/fund.aspx#?filtersSelectedValue=%7B%22sustainabilityRating%22:%7B%22id%22:%225%22%7D%7D&page=1&perPage=10&sortField=legalName&sortOrder=asc.
However, when opening the website with Selenium I always get a pop-up first; to close it I need to select the type of user (a radio button) and then click the "accept" button.
After I perform these clicks with Python and Selenium, the pop-up doesn't disappear, but I can see that the clicks were performed. It doesn't show any error (all the needed fields are selected and the Python script also doesn't throw anything).
Here is my code:
from selenium import webdriver
import time

browser = webdriver.Firefox()
url = "https://www.morningstar.de/de/screener/fund.aspx#?filtersSelectedValue=%7B%22sustainabilityRating%22:%7B%22id%22:%225%22%7D%7D&page=1&perPage=10&sortField=legalName&sortOrder=asc"
browser.get(url)
time.sleep(10)

try:
    radio_button = browser.find_elements_by_xpath('/html/body/div[2]/div[3]/div/div[2]/div/div[3]/div[1]/div[1]/fieldset/div[2]/label/span/span[1]')[0]
    radio_button.click()
    time.sleep(3)
    accept_button = browser.find_element_by_id('_evidon-accept-button')
    accept_button.click()
    print("accepted")
except:
    print("something went wrong")
I need to close this pop-up in order to get access to the table, what am I doing wrong?
Example for the radio button: #finaprofessional > span:nth-child(1) or #finaprofessional > span:nth-child(2)
import time
from selenium import webdriver

def example():
    firefox_browser = webdriver.Firefox()
    firefox_browser.get("https://www.morningstar.de/de/screener/fund.aspx#?filtersSelectedValue=%7B%22sustainabilityRating%22:%7B%22id%22:%225%22%7D%7D&page=1&perPage=10&sortField=legalName&sortOrder=asc")
    time.sleep(10)  # wait for page to load
    radio_button = firefox_browser.find_element_by_css_selector("#finaprofessional > span:nth-child(1)")
    radio_button.click()
    time.sleep(10)
    accept_button = firefox_browser.find_element_by_id("_evidon-accept-button")
    accept_button.click()

if __name__ == "__main__":
    example()
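A sturdier pattern than fixed time.sleep calls is to poll until the element is actually present, which is the idea behind Selenium's WebDriverWait with expected_conditions. A minimal sketch of that polling idea, shown with a plain counter so it runs without a browser; the poll_until name is my own, not a Selenium API:

```python
import time

def poll_until(condition, timeout=10.0, interval=0.5):
    # minimal re-implementation of the idea behind WebDriverWait: call
    # `condition` repeatedly until it returns something truthy or the timeout
    # expires, instead of sleeping for a fixed worst-case duration
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise RuntimeError("condition not met within %.1fs" % timeout)

# with Selenium you would pass e.g.
#   lambda: firefox_browser.find_elements_by_id("_evidon-accept-button")
# demonstration with a plain counter:
calls = {"n": 0}
def ready():
    calls["n"] += 1
    return calls["n"] >= 3

value = poll_until(ready, timeout=5.0, interval=0.01)
print(value)  # True, returned on the third call
```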
I am using Selenium with Python, and have been using implicit waits and try/except blocks to catch errors. However, I've noticed that if the browser crashes (say the user closes it during execution), my Python program hangs, and the implicit wait's timeout does not seem to apply. The process below will just stay there forever.
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium import webdriver
import datetime
import time
import sys
import os

def open_browser():
    print "Opening web page..."
    driver = webdriver.Chrome()
    driver.implicitly_wait(1)
    #driver.set_page_load_timeout(30)
    return driver

driver = open_browser()  # Opens web browser
# LET'S SAY I CLOSE THE BROWSER RIGHT HERE!
# IF I CLOSE THE PROCESS HERE, THE PROGRAM WILL HANG FOREVER
time.sleep(5)

while True:
    try:
        driver.get('http://www.google.com')
        break
    except:
        driver.quit()
        driver = open_browser()
The code you have provided will always hang whenever there is an exception while getting the Google home page.
What is probably happening is that attempting to get the page raises an exception that would normally halt the program, but you are masking it with the bare except clause.
Try the following amendment to your loop:
max_attempts = 10
attempts = 0
while attempts <= max_attempts:
    try:
        print "Retrieving google"
        driver.get('http://www.google.com')
        break
    except:
        print "Retrieving google failed"
        attempts += 1
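The bounded-retry loop above can also be written as a generic helper that restarts the driver between failed attempts and re-raises the last error instead of masking it silently. A sketch under my own naming (retry is not a Selenium API; recover stands in for quitting and relaunching the driver), demonstrated with a flaky stand-in for driver.get() so it runs without a browser:

```python
def retry(action, recover, max_attempts=10):
    # try `action` up to `max_attempts` times, running `recover` (e.g. quit
    # and relaunch the driver) between failures; re-raise the last error
    # instead of hiding it behind a bare except
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return action()
        except Exception as exc:
            last_error = exc
            if attempt < max_attempts:
                recover()
    raise last_error

# demonstration: a stand-in for driver.get() that fails twice, then succeeds
state = {"calls": 0}
def flaky_get():
    state["calls"] += 1
    if state["calls"] < 3:
        raise IOError("browser gone")
    return "loaded"

result = retry(flaky_get, recover=lambda: None)
print(result)  # "loaded", on the third attempt
```

With a real driver, recover would be something like: lambda: globals().update(driver=open_browser()) or a small function that calls driver.quit() and reassigns it.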
My ChromeDriver version is 2.22
In my code there is no quit() or close(), but the Chrome browser closes after execution every time.
If I switch the webdriver to Firefox, it works well.
My code is
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time

def scrapy_list_from_youtube_list(url):
    browser = webdriver.Chrome()
    browser.get(url)
    links = browser.find_elements_by_class_name('pl-video-title-link')
    download_list = []
    for link in links:
        download_list.append(link.get_attribute('href'))
    print download_list
    i = 0
    for download_link in download_list[0:2]:
        try:
            browser.get('http://www.en.savefrom.net/')
            inout = browser.find_element_by_id('sf_url')
            inout.send_keys(download_link)
            inout.send_keys(Keys.ENTER)
            time.sleep(20)
            c = browser.find_element_by_link_text('Download')
            print i
            # print c.get_attribute('href')
            c.click()
            i = i + 1
        except Exception as e:
            print e

scrapy_list_from_youtube_list('https://www.youtube.com/playlist?list=PLqjtD4kfVG7OFk0vLP1BxUJTmN3-Uj9qM')
I had a similar issue, but my code had the line driver.close() in it. I removed that line and my Chrome window didn't close after execution completed. Try a similar workaround.
I'm doing some testing in an Ubuntu (14.04) terminal using Python (2.7) and Selenium. I have written code that opens a browser, enters username and password information into the respective fields, and keeps the browser open for one hour. The problem is that a Python terminal opens when the code runs, and when the browser is closed manually, the blank terminal remains. How can I get the terminal to disappear when the browser is closed manually?
Code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
import sys

driver = webdriver.Firefox()
driver.maximize_window()
driver.get("http://example.com")
inputElement = driver.find_element_by_id("username")
inputElement.send_keys('...')
inputElement = driver.find_element_by_name("password")
inputElement.send_keys('...')
inputElement.send_keys(Keys.ENTER)

while 1:
    time.sleep(3600)
    try:
        b = driver.find_element_by_tag_name("body")
    except:
        sys.exit()
I should mention that I'm executing this script using a .desktop file.
You can ping the browser once every second during the "sleep" time; any driver call will raise once the window is gone:

i = 0
while i < 3600:
    driver.title
    time.sleep(1)
    i += 1
b = driver.find_element_by_tag_name("body")
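The per-second ping can be packaged so the script exits as soon as the browser disappears. A sketch under the assumption that any driver call raises once the window is closed; the watch helper name is my own, and it is demonstrated with a probe function that starts failing after two calls, so it runs without a browser (with Selenium the probe would be e.g. lambda: driver.title):

```python
import time

def watch(probe, interval=0.5, duration=3600):
    # poll `probe` once per `interval` seconds for up to `duration` seconds;
    # return False as soon as the probe raises, which is what happens when
    # the browser window was closed manually
    end = time.monotonic() + duration
    while time.monotonic() < end:
        try:
            probe()
        except Exception:
            return False
        time.sleep(interval)
    return True

# demonstration: a probe that starts raising after two successful calls
state = {"calls": 0}
def probe():
    state["calls"] += 1
    if state["calls"] > 2:
        raise RuntimeError("browser closed")

alive = watch(probe, interval=0.01, duration=5)
print(alive)  # False: the probe failed before the duration elapsed
```

In the original script you would follow `if not watch(lambda: driver.title): sys.exit()` so the terminal closes together with the browser.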