I attempted to follow this post [here][1]:
how to set proxy with authentication in selenium chromedriver python? to use an authenticated proxy with Chrome WebDriver. It doesn't work, and I get ERR_TUNNEL_CONNECTION_FAILED from Chrome. Is there a new solution, or does the code from the linked StackOverflow post need to be updated? I am using ChromeDriver v81 on Windows.
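One approach that sidesteps Chrome's lack of a native username/password proxy flag is the third-party selenium-wire package, which routes traffic through its own local proxy and handles the upstream authentication for you. A minimal sketch, assuming pip install selenium-wire; the credentials and proxy host below are placeholders:

from seleniumwire import webdriver  # pip install selenium-wire

# Placeholder credentials and proxy host; substitute your real values
sw_options = {
    'proxy': {
        'http': 'http://user:pass@proxy.example.com:8080',
        'https': 'https://user:pass@proxy.example.com:8080',
    }
}
driver = webdriver.Chrome(seleniumwire_options=sw_options)
driver.get('https://httpbin.org/ip')  # should report the proxy's IP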
Related
I am writing automation for one of our sites. I am able to access the site in Chrome when I open it manually, including in incognito mode. But when I try to launch it via automation in any browser, I get a "site cannot be reached" error.
Below is my simple code:
ChromeOptions options = new ChromeOptions();
// "excludeSwitches" and "useAutomationExtension" are ChromeDriver options,
// not command-line arguments, so they should not be passed via AddArguments
options.AddExcludedArgument("enable-automation");
options.AddAdditionalChromeOption("useAutomationExtension", false);
options.AddArgument("incognito");
IWebDriver driver = new ChromeDriver(options);
Console.WriteLine("Test MY Site");
driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(60);
driver.Manage().Window.Maximize();
driver.Manage().Cookies.DeleteAllCookies();
driver.Navigate().GoToUrl("https://example.net");
I tried with Selenium C#, Selenium Python, Cypress, etc., but I get the same error.
I also tried launching the site in Chrome, Edge, Firefox, and Safari, and got the same error there as well.
Note: I am using the latest version of Chrome, the latest version of Selenium, and Windows 10. I also tried after downgrading Selenium and Chrome.
I have also tried setting the proxy manually and automatically, and also with no proxy setting. The internet connection is not the issue at all, since the site works fine when I use it manually in Chrome.
I found that undetected-chromedriver helps in this case, but I could not find an exe for undetected-chromedriver in order to launch the site.
Please help.
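For what it's worth, undetected-chromedriver is distributed as a Python package rather than a standalone .exe; it patches ChromeDriver at runtime and is used from a script. A minimal sketch, assuming pip install undetected-chromedriver:

import undetected_chromedriver as uc

# uc.Chrome() downloads and patches a matching ChromeDriver automatically
driver = uc.Chrome()
driver.get("https://example.net")
print(driver.title)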
I am using Selenium in Python with Chrome as the browser. I enabled cookies on the host server and in the code, and it works perfectly when I run it locally. However, when I run it from Azure DevOps I encounter an issue: from the logs I can see that the webdriver disables cookies.
Here is the code I use to enable cookies:
import getpass

chrome_options.add_argument("--user-data-dir=C:\\Users\\" + getpass.getuser() + "\\AppData\\Local\\Google\\Chrome\\User Data\\Default")
What am I missing?
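Two things worth checking, offered as a hedged sketch rather than a definitive fix. First, --user-data-dir conventionally points at the "User Data" folder itself, with the profile selected separately via --profile-directory=Default. Second, a hosted Azure DevOps agent has no interactive Chrome profile at that path at all, so pointing Chrome at a writable temporary directory and managing cookies explicitly through the WebDriver API may be more reliable (the cookie name and value below are placeholders):

import tempfile
from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
# A fresh, writable profile directory that exists on any CI agent
chrome_options.add_argument("--user-data-dir=" + tempfile.mkdtemp())

driver = webdriver.Chrome(options=chrome_options)
driver.get("https://example.net")

# Cookies can also be set and read explicitly via the WebDriver API
driver.add_cookie({"name": "session", "value": "abc123"})
print(driver.get_cookies())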
When I try to open opensea.io with Selenium, it gives the Cloudflare captcha, and even if I solve the captcha, the captcha page does not redirect to opensea.io.
Update: Installing a VPN solved this, but there must be other ways.
driver.get("https://opensea.io")
Error screenshot given below: Cloudflare error page.
Edited:
There are several possible reasons for this kind of problem:
Cloudflare blocked your IP. Try using a new IP through a proxy (or a VPN, or another ISP), and see whether it works. (https://community.cloudflare.com/t/cant-bypass-cloudflare-captcha/200335/8)
Depending on the Selenium version and edition, it can explicitly tell the browser that it is a bot, letting websites know it is Selenium, so Cloudflare blocks the request; see the sketch after this list.
The browser is the problem. Try a different browser, like Firefox.
Cloudflare or the website you are trying to reach cares about special cookies that are not available in a fresh Selenium browser (this was my wild guess, but it's not the case).
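Regarding the second point, here is a minimal Python sketch of hiding the most common automation fingerprints; treat it as a starting point under the assumption you are on Chrome, not a guaranteed Cloudflare bypass:

from selenium import webdriver

options = webdriver.ChromeOptions()
# Drop the "Chrome is being controlled by automated test software" switch
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option("useAutomationExtension", False)
driver = webdriver.Chrome(options=options)
# Hide navigator.webdriver, which many bot checks inspect
driver.execute_cdp_cmd(
    "Page.addScriptToEvaluateOnNewDocument",
    {"source": "Object.defineProperty(navigator, 'webdriver', {get: () => undefined})"},
)
driver.get("https://opensea.io")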
P.S.: I have tried to connect to this URL (https://opensea.io), and interestingly, it worked fine for me.
Here is some information about the environment I performed this action on:
Operating System: CentOS 7, Linux
Selenium Standalone Version: 4.0.0
Java Version: jre-8u311-linux-x64
The browser I used: Firefox
I want to use a proxy with Selenium Python using the Firefox webdriver. I can assign Firefox a proxy through the normal browser GUI, but I don't know how to do it in Selenium Python. The proxy uses port 8080 and does not need authentication.
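A minimal sketch of doing this through Firefox preferences in Selenium Python; the 127.0.0.1 host is a placeholder for your proxy's actual address:

from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.set_preference("network.proxy.type", 1)  # 1 = manual proxy configuration
options.set_preference("network.proxy.http", "127.0.0.1")
options.set_preference("network.proxy.http_port", 8080)
options.set_preference("network.proxy.ssl", "127.0.0.1")
options.set_preference("network.proxy.ssl_port", 8080)
driver = webdriver.Firefox(options=options)
driver.get("https://example.com")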
I am experiencing very strange behaviour when testing Chrome via the Selenium webdriver.
Instead of navigating to pages as one would expect, the 'get' command leads only to the download of tiny files (with no type, or .aspx files) from the target site.
Importantly, this behavior only occurs when I pass chrome_options as an argument to the Chrome driver.
The same testing scripts work flawlessly with the Firefox driver.
Code:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options  # tried with and without

proxy = '127.0.0.1:9951'  # connects to a proxy application

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=%s' % proxy)
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.get('http://whatismyip.com')  # the scheme is required; a bare 'whatismyip.com' is not a valid URL
This leads to the automatic download of a file called 'download' (no file extension, size 2 bytes), while calling other sites results in the download of small .aspx files.
All this happens while the browser page remains blank and no interaction with elements happens, i.e. the site is not loaded at all.
No error message is thrown, except 'element not found'.
This is really strange.
Additional info:
I run Debian Wheezy 32-bit and use Python 2.7.
Any suggestions on how to solve this issue?
I tried your code and captured the traffic on localhost using a SOCKS v5 proxy through SSH. It is definitely sending data through the proxy, but no data is coming back. I have confirmed the proxy was working by using it with Firefox.
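If the application listening on 127.0.0.1:9951 is in fact a SOCKS proxy rather than an HTTP one, that would fit these symptoms, since Chrome treats a bare --proxy-server value as an HTTP proxy. A hedged sketch declaring the SOCKS scheme explicitly:

from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
# The socks5:// prefix tells Chrome to speak SOCKS, not HTTP, to the proxy
chrome_options.add_argument('--proxy-server=socks5://127.0.0.1:9951')
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.get('http://whatismyip.com')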
I'm running Google Chrome on Ubuntu 14.04 LTS 64-bit. My Chrome browser gives me the following message when I try to configure a proxy in its settings menu:
When running Google Chrome under a supported desktop environment, the
system proxy settings will be used. However, either your system is not
supported or there was a problem launching your system configuration.
But you can still configure via the command line. Please see man
google-chrome-stable for more information on flags and environment
variables.
Unfortunately I don't have a man page for google-chrome-stable.
I also discovered that, according to the Selenium documentation, Chrome uses the system-wide proxy settings, and it is unknown how to set the proxy in Chrome programmatically: http://docs.seleniumhq.org/docs/04_webdriver_advanced.jsp#using-a-proxy
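In practice, though, Chrome does accept a proxy on its command line, and Selenium can pass it through ChromeOptions, which overrides the system settings. A minimal sketch, assuming an unauthenticated HTTP proxy on 127.0.0.1:8080 (placeholder address):

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--proxy-server=http://127.0.0.1:8080')
driver = webdriver.Chrome(options=options)
driver.get('https://example.com')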