Selenium create multiple browser sessions - python

I want to create multiple browser sessions and log in with a different account in each one. The code below does what I want, but it closes all the browsers after the for loop ends. My guess is that Python ends all processes once the focus is gone. How can I solve the problem? With multithreading?
I want every session to stay open for 60 seconds.
def playroutine():
    index = 0
    for i in range(len(getlogindata())):
        username, password = givemelogin(index)
        index += 1

        driver = webdriver.Chrome('/Users/fb/Documents/chromedriver')  # Optional argument, if not specified will search path.
        driver.get('[...]')
        driver.find_element_by_name("username").send_keys(username)
        driver.find_element_by_name("password").send_keys(password)
        driver.find_element_by_id("login-button").click()
        time.sleep(2)
        driver.get('[...]')
Thanks :)

You can't close all the browsers after your loop ends, because the driver variable only refers to the last browser created inside the for loop.
You can, however, close each driver inside the loop, one at a time:
def playroutine():
    index = 0
    for i in range(len(getlogindata())):
        username, password = givemelogin(index)
        index += 1

        driver = webdriver.Chrome('/Users/fb/Documents/chromedriver')  # Optional argument, if not specified will search path.
        driver.get('[...]')
        driver.find_element_by_name("username").send_keys(username)
        driver.find_element_by_name("password").send_keys(password)
        driver.find_element_by_id("login-button").click()
        time.sleep(2)

        # close the driver
        driver.close()
        driver.quit()
Or, you can keep track of the drivers in a list, and try to loop through them and close them -- this is a bit hacky, and I can't say I would recommend it:
def playroutine():
    driver_list = []

    index = 0
    for i in range(len(getlogindata())):
        username, password = givemelogin(index)
        index += 1

        driver = webdriver.Chrome('/Users/fb/Documents/chromedriver')  # Optional argument, if not specified will search path.
        # add this driver to your list to keep track of it
        driver_list.append(driver)

        driver.get('[...]')
        driver.find_element_by_name("username").send_keys(username)
        driver.find_element_by_name("password").send_keys(password)
        driver.find_element_by_id("login-button").click()
        time.sleep(2)
        driver.get('[...]')

    # for loop is finished -- close all drivers
    for driver in driver_list:
        driver.close()
        driver.quit()
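Since the question asks for every session to stay open for 60 seconds, the same list approach can also be used the other way around: keep all the drivers alive, sleep once after the loop, and only then quit them. A minimal sketch, assuming getlogindata() and givemelogin() behave as in the question and '[...]' stands for the login URL:

import time
from selenium import webdriver

def playroutine():
    driver_list = []
    for index in range(len(getlogindata())):
        username, password = givemelogin(index)

        driver = webdriver.Chrome('/Users/fb/Documents/chromedriver')
        driver_list.append(driver)

        driver.get('[...]')
        driver.find_element_by_name("username").send_keys(username)
        driver.find_element_by_name("password").send_keys(password)
        driver.find_element_by_id("login-button").click()

    # every logged-in session stays open for 60 seconds ...
    time.sleep(60)

    # ... and only then is everything shut down
    for driver in driver_list:
        driver.quit()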

Related

Python undetectable_webdriver won't open in a loop

I am trying to open a site multiple times in a loop to test whether different credentials have expired so that I can notify our users. I do this by opening the database, getting the records, calling the Chrome driver to open the site, and entering the values into the site. The first loop works, but when the next one starts, the driver hangs and eventually outputs the error:
"unknown error: cannot connect to chrome at 127.0.0.1:XXXX from chrome not reachable"
This error commonly occurs when there is already an instance running. I have tried to prevent this by using both driver.close() and driver.quit() when the first loop is done but to no avail. I have taken care of all other possibilities of detection such as using proxies, different user agents, and also using the undetected_chromedriver by https://github.com/ultrafunkamsterdam/undetected-chromedriver.
The core issue I am looking to solve is being able to open an instance of the chrome driver, close it and open it back again all in the same execution loop until all the credentials I am testing have finished. I have abstracted the code and provided an isolated version that replicates the issue:
# INSTALL CHROMEDRIVER USING "pip install undetected-chromedriver"
import undetected_chromedriver.v2 as uc
# Python Libraries
import time

options = uc.ChromeOptions()
options.add_argument('--no-first-run')
driver = uc.Chrome(options=options)

length = 8
count = 0
if count < length:
    print("Im outside the loop")

while count < length:
    print("This is loop ", count)
    time.sleep(2)
    with driver:
        print("Im inside the loop")
        count += 1
        driver.get("https://google.com")
        time.sleep(5)
        print("Im at the end of the loop")
    driver.quit()   # Used to exit the browser, and end the session
    # driver.close()  # Only closes the window in focus
I recommend using a python virtualenv to keep packages consistent. I am using python3.9 on a Linux machine. Any solutions, suggestions, or workarounds would be greatly appreciated.
You are quitting your driver inside the loop and then trying to reach the executor address of a session that no longer exists, hence the error. You need to reinitialize the driver by moving its construction inside the while loop, so a fresh instance is created at the top of each iteration.
from multiprocessing import Process, freeze_support
import undetected_chromedriver as uc
# Python Libraries
import time

chroptions = uc.ChromeOptions()
chroptions.add_argument('--no-first-run enable_console_log = True')
# driver = uc.Chrome(options=chroptions)

length = 8
count = 0
if count < length:
    print("Im outside the loop")

while count < length:
    print("This is loop ", count)
    driver = uc.Chrome(options=chroptions)
    time.sleep(2)
    with driver:
        print("Im inside the loop")
        count += 1
        driver.get("https://google.com")
        print("Session ID: ", end='')  # added to show your session ID is changing
        print(driver.session_id)
    driver.quit()

Can I integrate database with my selenium python script

I have a scenario where I first hit an API to get all the data and feed that data into my Selenium Python script. But I want my script to run automatically whenever there is a new entry in my database. Right now I run the script manually; I want it to run automatically every time a new entry appears in my database.
The API I am using basically exposes the entries in the database. So the flow is: first there is an entry about a user in the DB, I receive this entry through the API, and whenever there is a new entry my script should run using the API data.
Below is my selenium script
import time
import requests
from selenium import webdriver

base_url = "https://www.fitotouch.com"
base_url1 = "https://www.fitotouch.com/qitouch"
qty: int = 2
cart_value: int = 1

driver = webdriver.Chrome('E:/Chrome driver/chromedriver.exe')
driver.maximize_window()

# function of our 'driver' object.
driver.implicitly_wait(10)  # 10 is in seconds
driver.get(base_url)
driver.implicitly_wait(10)
driver.find_element_by_name('password').send_keys("*****")
driver.implicitly_wait(10)
driver.find_element_by_class_name('arrow-icon').click()

data = requests.get('http://110.93.230.117:1403/api/order/5e439b7052fcf2189ccb5207').json()
print(data)

driver.implicitly_wait(10)
time.sleep(2.4)
# driver.find_element_by_xpath('//*[@id="header"]/div[2]/div/div[2]/div[3]/div[1]/a/span').click()
time.sleep(2.4)
driver.get('https://www.fitotouch.com/account/login/create')
time.sleep(2.4)
driver.switch_to.frame("accountFrame")
time.sleep(2.4)
driver.find_element_by_xpath('//*[@id="root"]/div/div/div/div[1]/div[1]/div/input').send_keys(data['FirstName'])
driver.find_element_by_xpath('//*[@id="root"]/div/div/div/div[1]/div[2]/div/input').send_keys(data['LastName'])
driver.find_element_by_xpath('//*[@id="root"]/div/div/div/div[2]/div/input').send_keys(data['Email'])
driver.find_element_by_xpath('//*[@id="root"]/div/div/div/div[3]/div/input').send_keys("*****")
driver.find_element_by_xpath('//*[@id="root"]/div/div/div/div[4]/div/input').send_keys("*****")
driver.find_element_by_xpath('//*[@id="root"]/div/div/div/button').click()

driver.get("https://www.fitotouch.com/fitoki/f-001-jing-fang-bai-du-wan")
# product_category = ['driver.get("https://www.fitotouch.com/fitoki/f-001-jing-fang-bai-du-wan")', 'driver.get("https://www.fitotouch.com/soria-chinasor/style-02-hzewl")']
driver.execute_script("window.scrollTo(0, 150)")
time.sleep(2.4)
cart = driver.find_element_by_xpath('/html/body/div[1]/main/article/section/div[2]/div/section/article/section[1]/section/div/div[3]/div/div')

if qty > 0:
    for i in range(qty):
        cart.click()
        time.sleep(2.4)
        # print("quantity was" + qty)
else:
    cart.click()

url = driver.current_url
print(url)
I'm not sure this is the best solution, but what comes to mind is that you'd need to have the script running constantly in a while loop. Outside the loop, set a variable that represents the latest row count in the table.
Every x seconds, count the number of rows in the table where you're expecting an entry; if the number of rows exceeds your previously set variable, run your function to fetch the data. Finally, update your last-row variable with the new value.
last_row = 10  # example that there are currently 10 rows in the DB table

while True:
    # query database for number of rows: "SELECT count(id) FROM table"
    result = db.fetchone()
    if result > last_row:
        get_data()
        last_row = result
    time.sleep(10)
This solution means your script is always running, but it only does the data collection when a new entry appears in your DB.
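To make that polling loop concrete, here is a minimal, self-contained sketch using sqlite3; the database file orders.db, the table name orders, and the run_selenium_script() helper are hypothetical stand-ins for whatever database and script are actually in use:

import sqlite3
import time

def run_selenium_script():
    # hypothetical placeholder for the Selenium flow shown above
    print("new entry detected -- running the Selenium script")

def count_rows(conn):
    # count the rows currently in the (hypothetical) orders table
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

conn = sqlite3.connect("orders.db")  # hypothetical database file
last_row = count_rows(conn)          # baseline: rows that already exist

while True:
    current = count_rows(conn)
    if current > last_row:           # something new arrived since the last check
        run_selenium_script()
        last_row = current
    time.sleep(10)                   # poll every 10 seconds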

open multiple webdrivers without login everytime

I am trying to run Selenium using ThreadPoolExecutor. The website requires a login, and I am trying to speed up a step in what I am doing on the website. But every time a thread opens Chrome, I need to log in again, and it sometimes just hangs. I log in once first, without threads, to do some processing. From there on, I would like to open a few Chrome webdrivers without needing to log in again. Is there a way around this? PS: the website has no ID and password strings in the URL.
def startup(dirPath):
    # Start the WebDriver, load options
    options = webdriver.ChromeOptions()
    options.add_argument("--disable-infobars")
    options.add_argument("--enable-file-cookies")
    params = {'behavior': 'allow', 'downloadPath': dirPath}
    wd = webdriver.Chrome(options=options, executable_path=r"C:\Chrome\chromedriver.exe")
    wd.execute_cdp_cmd('Page.setDownloadBehavior', params)
    # wd.delete_all_cookies()
    wd.set_page_load_timeout(30)
    wd.implicitly_wait(10)
    return wd

def webLogin(dID, pw, wd):
    wd.get('some url')
    # Login, clear any outstanding login in id
    wd.find_element_by_id('username').clear()
    wd.find_element_by_id('username').send_keys(dID)
    wd.find_element_by_id('password').clear()
    wd.find_element_by_id('password').send_keys(pw)
    wd.find_element_by_css_selector('.button').click()

if __name__ == '__main__':
    dirPath, styleList = firstProcessing()
    loginAndClearLB(dID, dPw, dirPath)  # calls startup & webLogin, this is also my 1st login

    # many webdrivers spawned here
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        results = {executor.submit(addArtsToLB, dID, dPw, dirPath, style): style for style in styleList}

    # Do other stuff
    wd2 = startup(dirPath)
    webLogin(dID, dPw, wd2)
    startDL(wd2)
    logOut(wd2, dirPath)
Any help would be greatly appreciated. Thanks!!
As mentioned above, you could obtain the authentication token from the first login and then include it in all subsequent requests.
However, another option (if you're using basic auth) is to just add the username and password into the URL, like:
https://username:password@your.domain.com
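In Selenium that pattern is just a normal driver.get() call with the credentials embedded in the URL; the host name below is hypothetical, and note that this only applies to HTTP Basic Auth and that recent Chrome versions restrict embedded credentials in some contexts:

from selenium import webdriver

driver = webdriver.Chrome()
# HTTP Basic Auth credentials embedded directly in the URL (hypothetical host)
driver.get("https://username:password@intranet.example.com/reports")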
OK, it looks like there is no solution yet for more complicated websites that do not use basic authentication. My modified solution:
def webOpenWithCookie(wd, cookies):
    wd.get('https://some website url/404')
    for cookie in cookies:
        wd.add_cookie(cookie)
    wd.get('https://some website url/home')
    return wd

def myThreadedFunc(dirPath, style, cookies):  # this is the function that gets threaded
    wd = startup(dirPath)  # just starts chrome
    wd = webOpenWithCookie(wd, cookies)  # opens a page on the site, adds the cookies to wd, then opens the real target page. No login required now.
    doSomethingHere(wd, style)
    wd.quit()  # close all the threads here better I think

if __name__ == '__main__':
    dirPath, styleList = firstProcessing()
    wd1 = startup(dirPath)
    wd1 = webLogin(dID, dPw, wd1)  # here I login once
    cookies = wd1.get_cookies()    # get the cookies from here

    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        # this spawns the threads, but each thread will not need to log in,
        # although the compromise is that it needs to go to the 404 page first.
        results = {executor.submit(myThreadedFunc, dirPath, style, cookies): style for style in styleList}
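A small extension of the same idea, not taken from the answers above: the cookies grabbed from wd1 can also be persisted to disk with pickle, so that a later run of the script can skip the initial login too, as long as the site's session cookies have not expired (the file name is hypothetical):

import pickle

# after the first successful login
with open("session_cookies.pkl", "wb") as f:
    pickle.dump(wd1.get_cookies(), f)

# in a later run, before calling webOpenWithCookie()
with open("session_cookies.pkl", "rb") as f:
    cookies = pickle.load(f)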

How to reuse a selenium driver instance during parallel processing?

To scrape a pool of URLs, I am parallel processing Selenium with joblib. In this context, I am facing two challenges:
Challenge 1 is to speed up the process. At the moment, my code opens and closes a driver instance for every URL (ideally it would be one per process).
Challenge 2 is to get rid of the CPU-intensive while loop that I think I need in order to continue on empty results (I know this is most likely wrong).
Pseudocode:
URL_list = [URL1, URL2, URL3, ..., URL100000]  # List of URLs to be scraped

def scrape(URL):
    while True:  # Loop needed to use continue
        try:  # Try scraping
            driver = webdriver.Firefox(executable_path=path)  # Set up driver
            website = driver.get(URL)  # Get URL
            results = do_something(website)  # Get results from URL content
            driver.close()  # Close worker
            if len(results) == 0:  # If do_something() failed:
                continue  # THEN Worker to skip URL
            else:  # If do_something() worked:
                safe_results("results.csv")  # THEN Save results
                break  # Go to next worker/URL
        except Exception as e:  # If something weird happens:
            save_exception(URL, e)  # THEN Save error message
            break  # Go to next worker/URL

Parallel(n_jobs=40)(delayed(scrape)(URL) for URL in URL_list)  # Run in 40 processes
My understanding is that in order to re-use a driver instance across iterations, the # Set up driver-line needs to be placed outside scrape(URL). However, everything outside scrape(URL) will not find its way to joblib's Parallel(n_jobs = 40). This would imply that you can't reuse driver instances while scraping with joblib which can't be true.
Q1: How to reuse driver instances during parallel processing in the above example?
Q2: How to get rid of the while-loop while maintaining functionality in the above-mentioned example?
Note: Flash and image loading is disabled in firefox_profile (code not shown)
1) You should first create a bunch of drivers, one for each worker, and hand an instance to each worker. I don't know of a way to pass drivers into the Parallel object directly, but you can use threading.current_thread().name as a key to identify drivers. To do that, use backend="threading". That way each thread has its own driver.
2) You don't need the while loop at all. The Parallel object itself iterates over all your URLs (assuming I have understood why you used the loop).
import threading
from joblib import Parallel, delayed
from selenium import webdriver

def scrape(URL):
    try:
        driver = drivers[threading.current_thread().name]
    except KeyError:
        drivers[threading.current_thread().name] = webdriver.Firefox()
        driver = drivers[threading.current_thread().name]
    driver.get(URL)
    results = do_something(driver)
    if results:
        safe_results("results.csv")

drivers = {}
Parallel(n_jobs=-1, backend="threading")(delayed(scrape)(URL) for URL in URL_list)

for driver in drivers.values():
    driver.quit()
That said, I don't really think you gain anything from using more jobs than you have CPUs, so n_jobs=-1 is the best option (of course, I may be wrong; try it).
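An alternative to the shared drivers dict, not from the answer above but a common pattern, is to keep each thread's driver in threading.local() storage; this avoids the KeyError dance, although you still have to record the drivers somewhere so they can all be quit at the end. A rough sketch under the same assumptions (do_something(), safe_results(), and URL_list come from the question's pseudocode):

import threading
from joblib import Parallel, delayed
from selenium import webdriver

thread_data = threading.local()  # per-thread storage
all_drivers = []                 # kept so every driver can be quit at the end
drivers_lock = threading.Lock()

def get_driver():
    # lazily create one Firefox instance per thread and remember it
    if not hasattr(thread_data, "driver"):
        thread_data.driver = webdriver.Firefox()
        with drivers_lock:
            all_drivers.append(thread_data.driver)
    return thread_data.driver

def scrape(URL):
    driver = get_driver()
    driver.get(URL)
    results = do_something(driver)   # from the question's pseudocode
    if results:
        safe_results("results.csv")  # from the question's pseudocode

Parallel(n_jobs=-1, backend="threading")(delayed(scrape)(URL) for URL in URL_list)

for driver in all_drivers:
    driver.quit()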

Selenium won't open a new url in for loop (Python & Chrome)

I can't seem to get Selenium to run this for loop correctly. It runs the first time without issue, but when it starts the second loop, the program just stops running with no error message. I get the same results when I attempt this with a Firefox browser. Maybe it has to do with me trying to start a browser instance when one is already running?
def bookroom(self):
    sessionrooms = ["611", "618"]  # the list being used by the for loop
    driver = webdriver.Firefox()
    # for loop trying each room
    for rooms in sessionrooms:
        room = roomoptions[rooms][0]
        sidenum = roomoptions[rooms][1]
        bookingurl = "https://coparooms.newschool.edu/copa-scheduler/Web/reservation.php?rid=" + room + "&sid=" + sidenum + "&rd=" + self.startdate
        driver.get(bookingurl)
        time.sleep(3)
        usernamefield = driver.find_element_by_id("email")
        usernamefield.send_keys(self.username)
        passwordfield = driver.find_element_by_id("password")
        passwordfield.send_keys(self.password)
        passwordfield.send_keys(Keys.RETURN)
        time.sleep(5)
        begin = Select(driver.find_element_by_name("beginPeriod"))
        print(self.starttime)
        begin.select_by_visible_text(convertarmy(self.starttime))
        end = Select(driver.find_element_by_name("endPeriod"))
        end.select_by_visible_text(convertarmy(self.endtime))
        creates = driver.find_element_by_xpath("//button[contains(.,'Create')]")
        creates.click()  # clicks the confirm button
        time.sleep(8)
        xpathtest = driver.find_element_by_id("result").text
        # if statement checks if creation was a success. If it is, exit the browser.
        if "success" in xpathtest or "Success" in xpathtest:
            print("Success!")
            driver.exit()
        else:
            print("Failure")
            time.sleep(2)
            # if creation was not a success, try the next room in sessionrooms
Update:
I found the problem, it was just a matter of uneven spacing. Only some of the loop was "in the loop".
if "success" in xpathtest or "Success" in xpathtest:
print "Success!"
driver.exit()
You will want to break out of your loop here, because you're closing the driver but then using it in the next iteration of the loop without starting a new driver instance.
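A rough sketch of the two ways to act on that advice; fill_and_submit() is a hypothetical helper standing in for the login and form-filling code from the question, and sessionrooms comes from the original method:

from selenium import webdriver

def fill_and_submit(driver, room):
    # hypothetical helper: opens the booking URL, logs in, selects the times,
    # clicks Create, and returns the text of the "result" element
    ...

# Option 1: one driver for the whole loop; stop as soon as a booking succeeds
driver = webdriver.Firefox()
for room in sessionrooms:
    result = fill_and_submit(driver, room)
    if "success" in result.lower():
        print("Success!")
        break              # don't keep using the driver after this point
    print("Failure")
driver.quit()              # quit once, after the loop

# Option 2: a fresh driver for every room, always cleaned up
for room in sessionrooms:
    driver = webdriver.Firefox()
    try:
        result = fill_and_submit(driver, room)
        print("Success!" if "success" in result.lower() else "Failure")
    finally:
        driver.quit()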
