Why does Chrome close without close() or quit()? - python

My ChromeDriver version is 2.22
In my code there is no quit() or close(), but the Chrome browser closes after execution every time.
But if I switch the webdriver to Firefox, it works fine.
My code is
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time

def scrapy_list_from_youtube_list(url):
    browser = webdriver.Chrome()
    browser.get(url)
    links = browser.find_elements_by_class_name('pl-video-title-link')
    download_list = []
    for link in links:
        download_list.append(link.get_attribute('href'))
    print download_list
    i = 0
    for download_link in download_list[0:2]:
        try:
            browser.get('http://www.en.savefrom.net/')
            inout = browser.find_element_by_id('sf_url')
            inout.send_keys(download_link)
            inout.send_keys(Keys.ENTER)
            time.sleep(20)
            c = browser.find_element_by_link_text('Download')
            print i
            # print c.get_attribute('href')
            c.click()
            i = i + 1
        except Exception as e:
            print e

scrapy_list_from_youtube_list('https://www.youtube.com/playlist?list=PLqjtD4kfVG7OFk0vLP1BxUJTmN3-Uj9qM')

I had a similar issue, but my code had the line driver.close() in it. I removed that line and my Chrome window no longer closed after the script finished. Try a similar workaround.


Selenium scrolling problem - element skipping

I'm trying to scrape my Deezer music, but when I scroll the page, Selenium skips a lot of tracks: it skips the first 30, displays 10, then skips another 30, and so on until the end of the page.
Here is the code:
import selenium
from selenium import webdriver

path = "./chromedriver"
driver = webdriver.Chrome(executable_path=path)
url = 'https://www.deezer.com/fr/playlist/2560242784'
driver.get(url)
for i in range(0, 20):
    try:
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
        musics = driver.find_elements_by_class_name('BT3T6')
        for music in musics:
            print(music.text)
    except Exception as e:
        print(e)
I tried scraping the page based on your code and succeeded.
I decided to scroll the page by 500px per step and then remove all duplicates and empty strings.
import selenium
import time
from selenium import webdriver

path = "./chromedriver"
driver = webdriver.Chrome(executable_path=path)
url = 'https://www.deezer.com/fr/playlist/2560242784'
driver.get(url)
all_music = []
last_scroll_y = driver.execute_script("return window.scrollY")
for i in range(0, 100):
    try:
        # first scrape
        musics = driver.find_elements_by_class_name('BT3T6')
        for music in musics:
            all_music.append(music.text)
        # then scroll down +500px
        driver.execute_script("window.scrollTo(0, window.scrollY+500);")
        time.sleep(0.2)  # some wait for the new content (200ms)
        current_scroll_y = driver.execute_script("return window.scrollY")
        # exit the loop if the page is not scrolled any more
        if current_scroll_y == last_scroll_y:
            break
        last_scroll_y = current_scroll_y
    except Exception as e:
        print(e)

# this removes all empty strings
all_music = list(filter(None, all_music))

# this removes all duplicates, but keeps the order
# based on https://stackoverflow.com/a/17016257/5226491
# python 3.7 required
all_music = list(dict.fromkeys(all_music))

# this also removes all duplicates, but the order will be changed
# all_music = list(set(all_music))

for m in all_music:
    print(m)
print('Total music found: ' + str(len(all_music)))
This runs for about 60-90 seconds and scrapes 1000+ items.
Note: it works fine with an active window, and also works in headless mode, but it stops scraping when I minimize the browser window. So either run it with the headless Chrome option:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.headless = True
driver = webdriver.Chrome(CHROMEDRIVER_PATH, options=options)
or do not minimize the window.

Selenium Chrome (Python): Browser freezes - timeout and quit browser in this case

I use Selenium Chrome to extract information from online sources. Basically, I loop over a list of URLs (stored in mylinks) and load the webpages in the browser as follows:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("window-size=1200,800")
browser = webdriver.Chrome(chrome_options=options)
browser.implicitly_wait(30)

for x in mylinks:
    try:
        browser.get(x)
        soup = BeautifulSoup(browser.page_source, "html.parser")
        city = soup.find("div", {"class": "city"}).text
    except:
        continue
My problem is that the browser "freezes" at some point. I know this problem is caused by the webpage. As a consequence, my routine stops since the browser no longer responds. browser.implicitly_wait(30) does not help here; neither explicit nor implicit waits solve the problem.
I want to "time out" the problem, meaning that I want to quit() the browser after x seconds (in case the browser freezes) and restart it.
I know that I could use a subprocess with a timeout, like:
import subprocess

def startprocess(filepath, waitingtime):
    p = subprocess.Popen("C://mypath//" + filepath)
    try:
        p.wait(waitingtime)
    except subprocess.TimeoutExpired:
        p.kill()
However, for my task this solution would be second-best.
Question: is there an alternative way to timeout the browser.get(x) step in the loop above (in case the browser freezes) and to continue to the next step?

Why can't I drag and drop in Selenium Chromedriver with Python?

I can't drag and drop in Selenium with the latest Chromedriver.
selenium='3.141.0'
python 3.7
Chrome = 74.0.3729.169
ChromeDriver =latest
The code below executes successfully, but the items are not dragged from source to destination, and I am not getting any errors at all. I tried all of the solutions below, one by one, but none of them are working.
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
import time

cd = webdriver.Chrome('Chromedriver.exe')
cd.get('https://www.seleniumeasy.com/test/drag-and-drop-demo.html')
cd.maximize_window()
elements = cd.find_element_by_id('todrag')
drag_item = elements.find_elements_by_tag_name('span')
drag_to = cd.find_element_by_id('mydropzone')

for i in drag_item:
    # Solution 1 (not working)
    action = ActionChains(cd)
    action.drag_and_drop(i, drag_to).perform()  # this is not working

    # Solution 2 (not working)
    ActionChains(cd).click_and_hold(i).move_to_element(drag_to).release(drag_to).perform()

    # Solution 3 (not working, as you need to download the js files)
    jquery_url = "http://code.jquery.com/jquery-1.11.2.min.js"
    with open("jquery_load_helper.js") as f:
        load_jquery_js = f.read()
    with open("drag_and_drop_helper.js") as f:
        js = f.read()
    cd.execute_async_script(load_jquery_js, jquery_url)
    cd.execute_script(js + "$('arguments[0]').simulateDragDrop({ dropTarget: \"arguments[1]\"});", i, drag_to)
I think there is something wrong with the site because this example I found on the web seems to work:
import time
from selenium import webdriver
from selenium.webdriver import ActionChains
# Create chrome driver.
driver = webdriver.Chrome()
# Open the webpage.
driver.get("https://openwritings.net/sites/default/files/selenium-test-pages/drag-drop.html")
# Pause for 5 seconds for you to see the initial state.
time.sleep(5)
# Drag and drop to target item.
##################################
drag_item = driver.find_element_by_id("draggable")
target_item = driver.find_element_by_id("droppable")
action_chains = ActionChains(driver)
action_chains.drag_and_drop(drag_item, target_item).perform()
##################################
# Pause for 10 seconds so that you can see the results.
time.sleep(10)
# Close.
driver.quit()
Hopefully, that example helped you!

ubuntu python selenium -- close terminal

I'm doing some testing on an Ubuntu terminal (14.04) using Python (2.7) and Selenium. I have created code that will open a browser, enter username and password information into the respective fields, and keep the browser open for one hour. The problem is that a Python terminal opens when the code is run, and when the browser is manually closed, the blank terminal remains. How can I get the terminal to disappear when the browser is manually closed?
Code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
import sys

driver = webdriver.Firefox()
driver.maximize_window()
driver.get("http://example.com")
inputElement = driver.find_element_by_id("username")
inputElement.send_keys('...')
inputElement = driver.find_element_by_name("password")
inputElement.send_keys('...')
inputElement.send_keys(Keys.ENTER)
while 1:
    time.sleep(3600)
    try:
        b = browser.find_by_tag("body")
    except:
        sys.exit()
I should mention that I'm executing this script using a .desktop file.
You can ping the browser once every second during the "sleep" time:
i = 0
while i < 3600:
    browser.title
    time.sleep(1)
    i += 1
b = browser.find_by_tag("body")

How to close all windows that Selenium opens?

I am using Selenium RC to do some testing now, and the driver I use is Python.
But now I face a problem: every time Selenium RC runs and opens a URL, it opens 2 windows, one for logging and the other for showing HTML content. But I can't close them both from the script.
Here is my script:
#!/usr/bin/env python
#-*-coding:utf-8-*-
from selenium import selenium

def main():
    sel = selenium('localhost', 4444, '*firefox', 'http://www.sina.com.cn/')
    sel.start()
    try:
        sel.open('http://www.sina.com.cn/')
    except Exception, e:
        print e
    else:
        print sel.get_title()
    sel.close()
    sel.stop()

if __name__ == '__main__':
    main()
It is very easy to understand. What I really want is to close all the windows that Selenium opens. I've tried close() and stop(), but neither works.
I had a similar case where my program opened many windows when scraping a webpage. Here is a sample code:
#!/usr/bin/python
import time
import webbrowser
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Firefox()
print "Browser fired-up!"
driver.get("https://www.something.com/")
driver.implicitly_wait(5)
while True:
    try:
        playlink = driver.find_element_by_xpath("/html/body/div[2]/div[1]/div/a")
        playlink.click()
        time.sleep(3)
    except NoSuchElementException:
        print "playlink Element not found "
    else:
        backbutton = driver.find_element_by_id("back-to-bing-text")
        backbutton.click()
    try:
        quizlink = driver.find_element_by_xpath("/html/body/div[2]/div[1]/div[1]/ul/li[1]/a/span/span[1]")
        quizlink.click()
    except NoSuchElementException:
        print "quiz1 Element not found "
    else:
        print "quiz1 clicked"
driver.quit()
driver.close() bugged me for a week, as I believed it would close all the windows. driver.quit() is what terminates all the processes and closes all the windows.
I've fixed this problem.
It happened because I had installed firefox-bin, not firefox.
Now that I've removed firefox-bin and installed firefox, it works.
stop() will close all windows that Selenium opened.
Thanks for the reminder, AutomatedTester.
I would suggest making a system call from Python to close the Firefox windows.
Bussiere
