Using a proxy with Selenium WebDriver (Firefox) in Python not working

I would like to write a program that changes my IP address when I visit a site with Selenium. I use the Firefox webdriver, but unfortunately the site I use for my tests returns my own IP and not the IP I set in the options. Could you tell me what the error is, please?
The program launches and the Firefox window opens (I am not using headless mode for this test), but it is my IP that is returned and not the one from the specified proxy.
Here is my program:
from selenium import webdriver
options = webdriver.FirefoxOptions()
proxy = f'{"137.74.65.101"}:{"80"}'
options.add_argument(f'--proxy-server={proxy}')
driver = webdriver.Firefox(options=options)
driver.get('https://httpbin.org/ip')
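A likely cause: --proxy-server is a Chromium command-line switch, so geckodriver/Firefox ignores it and the browser keeps using a direct connection. Below is a minimal sketch of the Firefox-specific route, setting the proxy through profile preferences instead (same IP and port as in the question; whether the site then shows the proxy's IP also depends on the proxy actually accepting your traffic):
from selenium import webdriver

PROXY_HOST = "137.74.65.101"
PROXY_PORT = 80

options = webdriver.FirefoxOptions()
options.set_preference("network.proxy.type", 1)  # 1 = manual proxy configuration
options.set_preference("network.proxy.http", PROXY_HOST)
options.set_preference("network.proxy.http_port", PROXY_PORT)
options.set_preference("network.proxy.ssl", PROXY_HOST)
options.set_preference("network.proxy.ssl_port", PROXY_PORT)

driver = webdriver.Firefox(options=options)
driver.get('https://httpbin.org/ip')
print(driver.page_source)  # should contain the proxy's IP, not yours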

Related

Selenium log in through automation

Is selenium only for testing?
I created a script to log in to Canvas, a website that my uni uses for class material.
However, it seems that it only logs in on the browser window generated by the driver, and I still have to log in manually in my actual browser.
Is there a way to make it so that I won't have to log in on the actual browser after running my script?
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
import time
PATH = r"C:the path \msedgedriver.exe"
driver = webdriver.Edge(PATH)
driver.get("the website")
driver.maximize_window()
#sign in page
user = driver.find_element(By.ID,"username")
user.send_keys("username")
pw = driver.find_element(By.ID,"password")
pw.send_keys("password")
pw.send_keys(Keys.RETURN)
driver.implicitly_wait(3)
#authentification
driver.switch_to.frame(driver.find_element(By.XPATH, "//iframe[@id='duo_iframe']"))
remember_me = driver.find_element(By.XPATH, "//input[@type='checkbox']")
remember_me.click()
duo = driver.find_element(By.XPATH, '//button[text()="Send Me a Push "]')
duo.click()
As per Selenium's homepage:
Selenium automates browsers. That's it! What you do with that power is
entirely up to you.
Primarily it is for automating web applications for testing purposes,
but is certainly not limited to just that.
Boring web-based administration tasks can (and should) also be
automated as well.
No, you won't be able to reconnect to an already opened Browsing Context.
Even if you were able to extract the ChromeDriver and Chrome session attributes (e.g. the Session ID, Cookies, UserAgent and other session attributes) from the already initiated ChromeDriver and Chrome browsing session, you still wouldn't be able to change those attributes on the running ChromeDriver.
A cleaner way is to spawn a new ChromeDriver and Chrome browser instance with the desired set of configurations.
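For the narrower goal of not having to log in again on every run, one common workaround (my addition, not part of the answer above) is to point the freshly spawned instance at a persistent user-data directory so cookies survive between runs. A minimal sketch for a Chromium-based browser; the profile path is hypothetical:
from selenium import webdriver

options = webdriver.ChromeOptions()
# Reuse a dedicated profile directory so cookies (and hence the login) persist across runs.
# The path below is only an example -- point it at any writable folder reserved for Selenium.
options.add_argument(r"--user-data-dir=C:\selenium-profile")

driver = webdriver.Chrome(options=options)
driver.get("the website")  # same placeholder URL as in the question
The same idea should also work with Edge (webdriver.EdgeOptions), since current Edge is Chromium-based.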

Certain Aspects of a Website Are Being Blocked When Using Selenium

I have been trying to create a script to log in and post an ad. However, when I direct Selenium to either the "post an ad" page or the "login" page, the URL doesn't load and I get a solid white screen. When I direct the URL to the homepage or any page other than the ones already listed, the website loads just fine.
My problem solving:
I have read that my IP might have been blocked, but when I change my public IP address through my router, the problem persists.
I am currently unsure whether the source of the problem is my IP, or whether the website's scripts are only meant to detect and block automation on certain pages.
I am using chromedriver version 81.0.4044.69 and Chrome version 81.0.4044.129 (Official Build) (64-bit).
Here is my code:
from selenium import webdriver
browser = webdriver.Chrome()
browser.get('https://www.kijiji.ca/t-login.html?targetUrl=L3Atc2VsZWN0LWNhdGVnb3J5Lmh0bWw/XnIrUFRKMS9oU1cxc29PdXAxbjUveFE9PQ--')
The result, with no errors: a blank white screen when running the program.
Try faking the user agent (some sites really are picky about that) using the fake_useragent module:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from fake_useragent import UserAgent
options = Options()
ua = UserAgent()
userAgent = ua.random  # pick a random real-world user-agent string
print(userAgent)
options.add_argument(f'--user-agent={userAgent}')
driver = webdriver.Chrome(options=options)
driver.get("https://www.google.co.in")
driver.quit()
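To check that the spoofed string is actually what the browser reports, you can query it from the page itself before the driver.quit() call above; a quick sanity check along these lines:
# Run before driver.quit(): ask the page which user agent the browser is sending
reported_ua = driver.execute_script("return navigator.userAgent;")
print(reported_ua)  # should match the fake_useragent value passed via --user-agent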

Headless Chrome (with selenium) CANNOT request with PROXY server, but requests can?

I am trying to use Chrome with the Python Selenium webdriver, but it does not seem to work when I set the proxy settings. Here is my code:
from selenium import webdriver
PROXY = 'http://42.115.88.220:53281'
chromeOptions = webdriver.ChromeOptions()
chromeOptions.add_argument('--proxy-server=%s' % PROXY)
chromeOptions.add_argument("ignore-certificate-errors")
wbe = webdriver.Chrome(options=chromeOptions)
wbe.get("http://icanhazip.com")
When I run the above code, the browser gives me a "This site can’t be reached" error:
This site can’t be reached
The connection was reset.
Try:
Checking the connection
Checking the proxy and the firewall
Running Windows Network Diagnostics
ERR_CONNECTION_RESET
Some efforts: I tried requests with my proxy server, and it works, so the problem shouldn't be the proxy server itself.
import requests
proxies = {"http": "http://42.115.88.220:53281"}
r = requests.get("http://icanhazip.com", proxies = proxies)
print (r.status_code)
This gives me a response code of 200 and a good response.
Goal: My final goal is to build a web crawler with headless Chrome behind a proxy, so I am testing a non-headless one first. But something seems to be wrong with this proxy setup.
I would really appreciate it if anyone could help me out with this problem!
Try this.
It looks like you used the wrong type of headless mode. For Chrome-based Selenium browsers it's important to set the --headless argument correctly.
from selenium import webdriver
PROXY = 'http://ip:port'
chromeOptions = webdriver.ChromeOptions()
chromeOptions.add_argument('--proxy-server=%s' % PROXY)
chromeOptions.add_argument("ignore-certificate-errors")
# Headless mode for chrome browser
chromeOptions.add_argument('--headless=chrome')
wbe = webdriver.Chrome('your_driver_path_or_service', options=chromeOptions)
wbe.get("http://icanhazip.com")
print(wbe.title)
print(wbe.current_url)
print(wbe.page_source)
# Output:
# http://icanhazip.com/
# <html><head><meta name="color-scheme" content="light dark"></head><body><pre
# style="word-wrap: break-word; white-space: pre-wrap;">your ip
# </pre></body></html>

Force Selenium Chrome Driver to use QUIC instead of TCP

I am working on downloading a HAR file from Chrome for YouTube through a Selenium Python script.
Code Snippet:
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--proxy-server={0}".format(url))
chrome_options.add_argument("--enable-quic")
self.driver = webdriver.Chrome(chromedriver,chrome_options = chrome_options)
self.proxy.new_har(args['url'], options={'captureHeaders': True})
self.driver.get(args['url'])
result = json.dumps(self.proxy.har, ensure_ascii=False)
I want QUIC to be used whenever I download the HAR, but when I look at the packets through Wireshark, the Selenium driver is using TCP only. Is there a way to force ChromeDriver to use QUIC? Or is there an alternative to BMP?
A similar thing has been asked for Firefox in this question: How to capture all requests made by page in webdriver? Is there any alternative to Browsermob? and there was a solution with Selenium alone, without the need for any BMP. So is it possible for Chrome?
A workaround for this problem could be to start Chrome normally (with your default profile, or create another profile), enable QUIC manually, and then start chromedriver with that profile loaded.
from selenium import webdriver

options = webdriver.ChromeOptions()
# Load the Chrome profile in which QUIC was enabled manually
options.add_argument("user-data-dir=/home/user/.config/google-chrome")
driver = webdriver.Chrome(executable_path="/home/user/Downloads/chromedriver", chrome_options=options)
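Alternatively, you can try requesting QUIC directly through Chrome's command-line switches. This is only a sketch, assuming your Chrome build still honors --enable-quic and --origin-to-force-quic-on; Chrome may still fall back to TCP if the QUIC handshake fails:
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--enable-quic")
# Force QUIC for a specific origin (host:port); adjust to the site you are capturing
options.add_argument("--origin-to-force-quic-on=www.youtube.com:443")

driver = webdriver.Chrome(options=options)
driver.get("https://www.youtube.com")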

how can I use selenium with my normal browser

Is it possible to connect Selenium to the browser I normally use instead of to a driver-managed one? For normal browsing I use Chrome with several plugins: Adblock Plus, FlashBlock, and several more. I want to try to load a site using this specific configuration. How can I do that?
P.S. I don't want to connect only to an open browser as in this question:
How to connect to an already open browser?
I don't care if I spawn the process using a driver. I just want the full browser configuration: cookies, plugins, fonts, etc.
Thanks
First, you need to download ChromeDriver, then either add the path to the executable to the PATH environment variable, or pass the path in the executable_path argument:
from selenium import webdriver
driver = webdriver.Chrome(executable_path='/path/to/executable/chrome/driver')
In order to load extensions, you would need to set ChromeOptions:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = webdriver.ChromeOptions()
options.add_extension('Adblock-Plus_v1.4.1.crx')
driver = webdriver.Chrome(chrome_options=options)
You can also save the chrome user profile you have and load it to the ChromeDriver:
options = webdriver.ChromeOptions()
options.add_argument('--user-data-dir=/path/to/my/profile')
driver = webdriver.Chrome(chrome_options=options)
See also:
Running Selenium WebDriver using Python with extensions (.crx files)
ChromeDriver capabilities/options
