Chrome webdriver closes automatically - python

I am trying to use Selenium for scraping, and when I start the WebDriver it automatically closes. I've tried everything and it still doesn't work.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

def launchBrowser():
    ch_options = webdriver.ChromeOptions()
    ch_options.add_experimental_option("detach", True)
    ch_options.add_experimental_option('excludeSwitches', ['enable-logging'])
    ch_driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=ch_options)
    ch_driver.get(url)  # url is assumed to be defined elsewhere in the script

launchBrowser()
I tried to implement some solutions I have seen, but it still doesn't work.
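For reference, the detach pattern in the snippet above is the same one the answers below rely on. A stripped-down sketch, assuming Selenium 4.6+ (whose built-in Selenium Manager resolves a matching chromedriver, so ChromeDriverManager isn't strictly needed) and a placeholder URL:
from selenium import webdriver

ch_options = webdriver.ChromeOptions()
# "detach" keeps the Chrome window open after the Python script exits
ch_options.add_experimental_option("detach", True)

driver = webdriver.Chrome(options=ch_options)  # Selenium Manager locates the driver
driver.get("https://example.com")  # placeholder URL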

Related

Prevent Safari from closing in Mac using Selenium Python?

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
safari_option = Options()
# Prevent the browser from closing after execution finishes
safari_option.add_experimental_option("detach", True)
driver = webdriver.Safari(options=safari_option)
driver.get("https://www.google.com/?client=safari")
elem = driver.find_element(By.NAME, "q")
elem.clear()
elem.send_keys("test")
I want to prevent Safari from closing after the WebDriver has finished, so that I can clearly watch the automation of another website step by step.
I tried using safari_option.add_experimental_option("detach", True)
as has been done here
Python selenium keep browser open
for Chrome, but I'm not sure why it didn't work for Safari. It seems like a simple problem, but I couldn't find an answer for it on Google (or maybe I'm blind). I'm pretty new to Selenium, by the way.
Thanks
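A likely explanation, though the thread itself doesn't confirm it: add_experimental_option comes from Chrome's options class, not Safari's. The snippet above imports the Chrome Options class and hands it to webdriver.Safari, so the detach flag ends up in a goog:chromeOptions capability that safaridriver simply ignores; Safari has no documented equivalent. A minimal sketch of the Chromium-only pattern, assuming Selenium 4:
from selenium import webdriver

# "detach" is an experimental Chromium option: it keeps the browser
# window open after the Python process exits. safaridriver does not
# read goog:chromeOptions, so this has no effect on Safari.
chrome_options = webdriver.ChromeOptions()
chrome_options.add_experimental_option("detach", True)

driver = webdriver.Chrome(options=chrome_options)
driver.get("https://www.google.com")
# When the script ends here, the Chrome window stays open.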

selenium in python, how to ignore errors without closing the browser

I'm doing an automation with Selenium in Python.
How can I simply ignore an error that happens on the site without it closing the browser?
I looked in several places and didn't find anything that helped.
You can do:
# Needed libs
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_experimental_option("detach", True)
driver = webdriver.Chrome(options=chrome_options)
Then, if something fails, your browser will not be closed.
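Note that detach only keeps the window open after the script ends (or dies). If the goal is for the script itself to survive a failing step, wrapping the risky call in try/except is the usual complement; a minimal sketch, with a placeholder URL and selector:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import WebDriverException

chrome_options = webdriver.ChromeOptions()
chrome_options.add_experimental_option("detach", True)
driver = webdriver.Chrome(options=chrome_options)

driver.get("https://example.com")  # placeholder URL
try:
    driver.find_element(By.ID, "might-not-exist").click()  # placeholder selector
except WebDriverException as e:
    # Log the failure and keep going instead of letting it kill the script
    print(f"Ignored error: {e}")
# The script continues here and the browser stays open.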

driver.get() stopped working in Headless -mode (Chrome)

Recently, a scraper I made stopped working in headless mode. I've tried with both Firefox and Chrome. Notable details: I am using seleniumwire to access API requests, and I am using ChromeDriverManager to get the driver. The current version is Chrome/93.0.4577.63.
I've tried modifying the User-Agent manually, as can be seen in the code below, in case the website added some checks blocking HeadlessChrome/93.0.4577.63, which is the original User-Agent. This did not help.
When running the script in regular mode, it works. When running in headless mode, the code below outputs [], meaning that driver.get(url) does not capture any requests. I run this code daily, and it stopped working on 8.9.2021 I think, during the day.
from selenium.webdriver.chrome.options import Options as chromeOptions
from seleniumwire import webdriver
from webdriver_manager.chrome import ChromeDriverManager
options = {
    'suppress_connection_errors': False,
    'connection_timeout': None
}
chrome_options = chromeOptions()
chrome_options.add_argument("--start-maximized")
chrome_options.add_argument("--incognito")
chrome_options.add_argument('--log-level=2')
chrome_options.add_argument("--window-size=1920,1080")
chrome_options.add_argument("--disable-extensions")
chrome_options.add_argument('--allow-running-insecure-content')
chrome_options.add_argument('--headless')
driver = webdriver.Chrome(ChromeDriverManager().install(), seleniumwire_options=options, chrome_options=chrome_options)
userAgent = driver.execute_script("return navigator.userAgent;")
userAgent = userAgent.replace('Headless', '')
driver.execute_cdp_cmd('Network.setUserAgentOverride', {"userAgent": userAgent})
url = 'my URL goes here'
driver.get(url)
print(driver.requests)
Same issue with Firefox: headless does not work but regular browsing does. Any idea what might cause this problem and what could solve it? I've also tried adding the following arguments to the Chrome options, without any luck:
chrome_options.add_argument("--proxy-server='direct://'")
chrome_options.add_argument("--proxy-bypass-list=*")
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--disable-dev-shm-usage')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--ignore-certificate-errors')
chrome_options.add_argument('--headless')
chrome_options.add_argument('--ignore-certificate-errors-spki-list')
chrome_options.add_argument('--ignore-ssl-errors')
This may have been solved - I noticed that I first set the window size to maximized and after that set it to 1920,1080. When I removed the maximize argument, chrome_options.add_argument("--start-maximized"), the problem disappeared and the script works once again.
I'm not sure whether this actually solved it or whether it was something else, since Selenium is a bit finicky and sometimes data just won't load the same way for the same web page, but at least now it works.
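The fix is at least plausible: in headless mode there is no real window to maximize, so --start-maximized can conflict with an explicit --window-size, and --window-size alone is the reliable way to fix the headless viewport. A sketch of the trimmed option set described above (reusing the names from the code earlier in this question):
chrome_options = chromeOptions()
chrome_options.add_argument("--incognito")
chrome_options.add_argument('--log-level=2')
chrome_options.add_argument("--window-size=1920,1080")  # sole size directive; reliable in headless mode
chrome_options.add_argument("--disable-extensions")
chrome_options.add_argument('--allow-running-insecure-content')
chrome_options.add_argument('--headless')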

How to get data like xpath and ids from a minimized webpage using selenium

The issue I'm dealing with is getting Selenium to run in the background while collecting data such as page elements' XPaths and IDs, so the browser tab doesn't keep popping up in front of other running programs.
You should try running your browser in headless mode. Here is a code snippet with a function that returns a Chrome instance, optionally in headless mode.
import os

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def browser_open(headless=False):
    options = Options()
    options.add_argument("disable-gpu")
    options.add_argument("no-default-browser-check")
    options.add_argument("no-first-run")
    options.add_argument("no-sandbox")
    options.add_argument("window-size=1300,744")  # Chrome expects "width,height"
    if headless:
        options.add_argument("headless")
    chrome_browser = webdriver.Chrome(executable_path=os.path.join(os.getcwd(), "chromedriver"), chrome_options=options)
    chrome_browser.maximize_window()
    return chrome_browser
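A usage sketch for the function above (it assumes a chromedriver binary in the current working directory, as the function expects, and example.com stands in for a real target):
from selenium.webdriver.common.by import By

driver = browser_open(headless=True)
driver.get("https://example.com")  # placeholder URL
# Element lookups (ids, XPaths, ...) behave exactly as in a visible browser
heading = driver.find_element(By.TAG_NAME, "h1")
print(heading.text)
driver.quit()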

Selenium webdriver not opening websites in default chrome profile

I have tried Selenium WebDriver in Python and it works fine. But when I try to open the default Chrome profile, it doesn't open the websites.
The code is
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

chromeOptions = webdriver.ChromeOptions()
chromeOptions.add_argument("user-data-dir=/Users/prajwal/Library/Application Support/Google/Chrome")
capability = DesiredCapabilities.CHROME
capability["pageLoadStrategy"] = "normal"
driver = webdriver.Chrome(desired_capabilities=capability, chrome_options=chromeOptions)
driver.get("https://www.google.com")
The window opens in this case, but it doesn't load the website. However, it works fine if I remove
chromeOptions.add_argument("user-data-dir=/Users/prajwal/Library/Application Support/Google/Chrome")
Where am I going wrong?
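One common cause, though the thread doesn't confirm it: Chrome will not drive a user-data-dir that is already locked by a running Chrome instance, so the window opens but navigation silently stalls. A sketch of the usual workaround, reusing the profile path from the question - quit every running Chrome first, or point Selenium at a copy of the profile directory:
from selenium import webdriver

chromeOptions = webdriver.ChromeOptions()
# Quit all running Chrome windows first, or use a copy of the profile
# directory, so Selenium isn't fighting the live browser for the lock.
chromeOptions.add_argument("user-data-dir=/Users/prajwal/Library/Application Support/Google/Chrome")
chromeOptions.add_argument("profile-directory=Default")  # pick the profile inside the user data dir

driver = webdriver.Chrome(options=chromeOptions)
driver.get("https://www.google.com")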
