I am trying to add cookies to the Selenium webdriver. I am specifying them like this:
driver.add_cookie({'domain': '.facebook.com',
                   'expiry': 1456567765,
                   'httpOnly': False,
                   'name': 'fr',
                   'path': '/',
                   'sameSite': 'None',
                   'secure': True,
                   'value': "scsvdsvbrsdvasvdsgdssdv"})
I also tried this way:
for i in cookies:
    driver.add_cookie(i)
However, I am getting this error:
Message: invalid session id
Stacktrace:
0 chromedriver 0x000000010f17e788 chromedriver + 4515720
Related
https://chromedevtools.github.io/devtools-protocol/tot/Emulation/#method-canEmulate
If you look at the Emulation.setUserAgentOverride section of the DevTools Protocol documentation linked above, there is a userAgentMetadata parameter, but Selenium's Python API doesn't expose it.
I want to customize Sec-Ch-Ua.
When I run return navigator.userAgentData through execute_script, I want it to return something like this:
{'brands': [{'brand': '.Not/A)Brand', 'version': '99'}, {'brand': 'Google Chrome', 'version': '103'}, {'brand': 'Chromium', 'version': '103'}], 'mobile': False, 'platform': 'Windows'}
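One way to reach that parameter is Chrome's DevTools Protocol bridge: Selenium's Chrome driver exposes execute_cdp_cmd(), which can call Network.setUserAgentOverride with a userAgentMetadata payload. This is a sketch, not a definitive recipe; the fullVersion and platformVersion strings below are assumptions chosen to match the brand versions in the question, and the browser section is left disabled since it needs a local chromedriver.

```python
# Sketch: set userAgentMetadata (and hence Sec-CH-UA) through the Chrome
# DevTools Protocol, since the plain Selenium API does not expose it.

def build_ua_override(user_agent):
    """Build the CDP payload; the brand list mirrors the desired
    navigator.userAgentData output from the question."""
    return {
        "userAgent": user_agent,
        "userAgentMetadata": {
            "brands": [
                {"brand": ".Not/A)Brand", "version": "99"},
                {"brand": "Google Chrome", "version": "103"},
                {"brand": "Chromium", "version": "103"},
            ],
            "fullVersion": "103.0.5060.114",   # assumed full version string
            "platform": "Windows",
            "platformVersion": "10.0.0",       # assumed platform version
            "architecture": "x86",
            "model": "",
            "mobile": False,
        },
    }

RUN_BROWSER = False  # flip to True locally with chromedriver installed
if RUN_BROWSER:
    from selenium import webdriver
    driver = webdriver.Chrome()
    driver.execute_cdp_cmd("Network.setUserAgentOverride",
                           build_ua_override("Mozilla/5.0 (Windows NT 10.0)"))
    driver.get("https://example.com/")
    print(driver.execute_script("return navigator.userAgentData"))
```

The override must be sent before navigating, because the client hints are attached to the request headers of the page load itself.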
I'm calling get_cookies() on my selenium web driver. Of course we know this fetches the cookies for the current domain. However, many popular sites set cookies on both example.com and www.example.com.
Technically, it's not really a "separate domain" or even a meaningfully different subdomain. Nearly every website on the internet serves the same site at the www subdomain as at the root.
So is it still impossible to save cookies for the two domains, since one is a sub domain? I know the answer is complicated if you want to save cookies for all domains, but I figured this is kind of different since they really are the same domain.
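One thing worth noting: a cookie whose domain starts with a dot (like .instagram.com in the output below) already applies to both the bare host and www; only host-only cookies differ between the two. For those, a workaround sketch is to visit each host and merge the get_cookies() lists, de-duplicating on (domain, name, path). The merge helper is plain Python, so it works with whatever the driver returns; example.com here is just the placeholder from the question.

```python
# Sketch of a workaround: visit both hosts, then merge the cookie lists,
# de-duplicating on (domain, name, path). Later lists win on conflicts.

def merge_cookies(*cookie_lists):
    merged = {}
    for cookies in cookie_lists:
        for c in cookies:
            key = (c.get("domain"), c.get("name"), c.get("path"))
            merged[key] = c
    return list(merged.values())

RUN_BROWSER = False  # flip to True locally with a browser installed
if RUN_BROWSER:
    from selenium import webdriver
    driver = webdriver.Firefox()
    collected = []
    for url in ("https://example.com/", "https://www.example.com/"):
        driver.get(url)
        collected.append(driver.get_cookies())
    print(merge_cookies(*collected))
```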
Replicate it with this code:
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://www.instagram.com/")
print(driver.get_cookies())
output:
[{'name': 'ig_did', 'value': 'F5FDFBB0-7D13-4E4E-A100-C627BD1998B7', 'path': '/', 'domain': '.instagram.com', 'secure': True, 'httpOnly': True, 'expiry': 1671083433}, {'name': 'mid', 'value': 'X9hOqQAEAAFWnsZg8-PeYdGqVcTU', 'path': '/', 'domain': '.instagram.com', 'secure': True, 'httpOnly': False, 'expiry': 1671083433}, {'name': 'ig_nrcb', 'value': '1', 'path': '/', 'domain': '.instagram.com', 'secure': True, 'httpOnly': False, 'expiry': 1639547433}, {'name': 'csrftoken', 'value': 'Yy8Bew6500BinlUcAK232m7xPnhOuN4Q', 'path': '/', 'domain': '.instagram.com', 'secure': True, 'httpOnly': False, 'expiry': 1639461034}]
Then load the page in a fresh browser instance and check yourself. You'll see www is there.
The main domain's cookies look fine though. My idea is to use the requests library and fetch all cookies via a plain HTTP request:
import requests
# Making a get request
response = requests.get('https://www.instagram.com/')
# printing request cookies
print(response.cookies)
Domain
To host your application on the internet you need a domain name. Domain names act as a human-readable placeholder for the complex string of numbers known as an IP address. As an example,
https://www.instagram.com/
With the latest Firefox v84.0, accessing the Instagram application, the following cookies are observed within the https://www.instagram.com domain.
Subdomain
A subdomain is an add-on to your primary domain name. For example, when using sites such as Craigslist, you are always using a subdomain like reno.craigslist.org or sfbay.craigslist.org. You will automatically be forwarded to the subdomain that corresponds to your physical location. Essentially, a subdomain is a separate part of your website that operates under the same primary domain name.
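The scoping rule that decides whether a stored cookie applies to a host can be sketched with RFC 6265 domain matching. This is a simplified sketch (it ignores IP-address hosts and public-suffix rules): a cookie stored for .example.com matches www.example.com, but a host-only cookie for www.example.com does not match the bare domain.

```python
# Simplified RFC 6265 domain matching: decides whether a cookie stored
# for cookie_domain applies to request_host. A leading dot (as in the
# ".instagram.com" entries above) marks a domain cookie that also covers
# subdomains. IP-address hosts are omitted for brevity.

def domain_matches(request_host, cookie_domain):
    cookie_domain = cookie_domain.lstrip(".").lower()
    request_host = request_host.lower()
    if request_host == cookie_domain:
        return True
    return request_host.endswith("." + cookie_domain)
```

So a cookie with domain .example.com is sent to both example.com and www.example.com, which is why the leading-dot entries in the earlier output already cover the www host.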
Reusing cookies
If you have stored cookies from the domain example.com, these stored cookies can't be pushed through the webdriver session to any other domain, e.g. example.edu. The stored cookies can be used only within example.com. Further, to log a user in automatically in the future, you need to store the cookies only once, at the moment the user logs in. Before adding the cookies back, you need to browse to the same domain from which the cookies were collected.
Demonstration
As an example, you can store the cookies once the user has logged in to an application as follows:
from selenium import webdriver
from selenium.webdriver.common.by import By
import pickle

driver = webdriver.Chrome()
driver.get('http://demo.guru99.com/test/cookie/selenium_aut.php')
driver.find_element(By.NAME, "username").send_keys("abc123")
driver.find_element(By.NAME, "password").send_keys("123xyz")
driver.find_element(By.NAME, "submit").click()
# storing the cookies
pickle.dump(driver.get_cookies(), open("cookies.pkl", "wb"))
driver.quit()
Later, at any point in time when you want the user logged in automatically, you need to browse to the specific domain/URL first and then add the cookies back as follows:
from selenium import webdriver
import pickle

driver = webdriver.Chrome()
driver.get('http://demo.guru99.com/test/cookie/selenium_aut.php')
# loading the stored cookies
cookies = pickle.load(open("cookies.pkl", "rb"))
for cookie in cookies:
    # adding the cookies to the session through the webdriver instance
    driver.add_cookie(cookie)
driver.get('http://demo.guru99.com/test/cookie/selenium_cookie.php')
Reference
You can find a detailed discussion in:
org.openqa.selenium.InvalidCookieDomainException: Document is cookie-averse using Selenium and WebDriver
I am using selenium to do a bit of automation, but I would like to be able to keep the browser windows open even after the python console has been closed.
Here are my current settings for the webdriver:
capabilities = {
    'browserName': 'chrome',
    'version': '',
    'platform': 'ANY',
    'javascriptEnabled': True,
    'chromeOptions': {
        'useAutomationExtension': False,
        'forceDevToolsScreenshot': True,
        'detach': False,
        'args': ['--start-maximized', '--disable-infobars', '--log-level=3']
    }
}
driver = webdriver.Chrome(desired_capabilities=capabilities)
Does anyone know how I can achieve this? Thanks.
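Chrome's "detach" option is exactly what controls this, and in the settings above it is set to False. A sketch of the usual fix, assuming ChromeOptions and its add_experimental_option API (the helper returns a plain dict so it can be inspected without a browser; the browser section is left disabled since it needs a local chromedriver):

```python
# Sketch: keep Chrome open after the Python process ends by enabling the
# "detach" experimental option (it is set to False in the settings above).

def detach_capability(extra=None):
    """Build the experimental-options mapping as plain data."""
    opts = {"detach": True, "useAutomationExtension": False}
    if extra:
        opts.update(extra)
    return opts

RUN_BROWSER = False  # flip to True locally with chromedriver installed
if RUN_BROWSER:
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument("--start-maximized")
    for name, value in detach_capability().items():
        options.add_experimental_option(name, value)

    driver = webdriver.Chrome(options=options)
    driver.get("https://example.com/")
    # The browser window stays open after this script (and the Python
    # console) exits, as long as driver.quit() is never called.
```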
I am trying to add python requests session cookies to my selenium webdriver.
I have tried this so far:
for c in self.s.cookies:
    driver.add_cookie({'name': c.name, 'value': c.value, 'path': c.path, 'expiry': c.expires})
This code is working fine for PhantomJS whereas it's not for Firefox and Chrome.
My questions:
Is there any special way of iterating the cookie jar for Firefox and Chrome?
Why does it work for PhantomJS?
for cookie in s.cookies:  # session cookies
    # Setting domain to None automatically instructs most webdrivers to use
    # the domain of the current window handle
    cookie_dict = {'domain': None, 'name': cookie.name, 'value': cookie.value, 'secure': cookie.secure}
    if cookie.expires:
        cookie_dict['expiry'] = cookie.expires
    if cookie.path_specified:
        cookie_dict['path'] = cookie.path
    driver.add_cookie(cookie_dict)
Check this for a complete solution https://github.com/cryzed/Selenium-Requests/blob/master/seleniumrequests/request.py
I am using Python Selenium Chrome WebDriver
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
and
self.driver = webdriver.Chrome(chrome_options=options, desired_capabilities=capabilities)
print self.driver.get('https://192.168.178.20:1337/login?email=me@domain.com&password=mypassword')
print self.driver.get('https://192.168.178.20:1337/this/that?name=john')
Previously I didn't need to authenticate and my GET went through, but now the server requires a PUT request with email and password params. I have tested the PUT in Postman and it worked fine.
Once authenticated, I want to browse to another URL using GET, but I am getting a 500, most likely because the session didn't retain that I authenticated.
How can I check that my login worked? How do I retrieve the response?
Do I need to retrieve & save some kind of token or cookie for the 2nd request go through?
Console log
headless_chrome > Auth
None
headless_chrome > GET
None
headless_chrome > CONSOLE
headless_chrome console > {u'source': u'network', u'message': u'https://192.168.178.20:1337/this/that?name=john 0:0 Failed to load resource: the server responded with a status of 500 (Internal Server Error)', u'timestamp': 1479212713208, u'level': u'SEVERE'}
headless_chrome > title:
{'_file_detector': <selenium.webdriver.remote.file_detector.LocalFileDetector object at 0x7f8507a25f50>,
'_is_remote': False,
'_mobile': <selenium.webdriver.remote.mobile.Mobile object at 0x7f8507a25d90>,
'_switch_to': <selenium.webdriver.remote.switch_to.SwitchTo instance at 0x7f8507a336c8>,
'capabilities': {u'acceptSslCerts': True,
u'applicationCacheEnabled': False,
u'browserConnectionEnabled': False,
u'browserName': u'chrome',
u'chrome': {u'chromedriverVersion': u'2.21.371461 (633e689b520b25f3e264a2ede6b74ccc23cb636a)',
u'userDataDir': u'/tmp/.com.google.Chrome.ybR9Fm'},
u'cssSelectorsEnabled': True,
u'databaseEnabled': False,
u'handlesAlerts': True,
u'hasTouchScreen': False,
u'javascriptEnabled': True,
u'locationContextEnabled': True,
u'mobileEmulationEnabled': False,
u'nativeEvents': True,
u'platform': u'Linux',
u'rotatable': False,
u'takesHeapSnapshot': True,
u'takesScreenshot': True,
u'version': u'50.0.2661.102',
u'webStorageEnabled': True},
'command_executor': <selenium.webdriver.chrome.remote_connection.ChromeRemoteConnection object at 0x7f8507a25cd0>,
'error_handler': <selenium.webdriver.remote.errorhandler.ErrorHandler object at 0x7f8507a25d50>,
'service': <selenium.webdriver.chrome.service.Service object at 0x7f85083bc3d0>,
'session_id': u'72a5ce48d950be26b3f33de4adb34428',
'w3c': False}
headless_chrome > except else
clean up Selenium browser
Selenium WebDriver has no .put method; navigation always issues GET requests. You should use .get both for the authentication URL and for navigating to the other URL.
If that flow works manually in a browser, it should work through WebDriver as well.
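If the server really insists on a PUT, one hedged workaround is to authenticate with the requests library (which can issue PUT) and then copy the session's cookies into Selenium before the second GET. The URL and credentials below are the ones from the question; whether the server keeps the session in a cookie is an assumption, and the browser section is left disabled since it needs a running chromedriver and server.

```python
# Sketch: do the PUT with requests (WebDriver cannot), then copy the
# authenticated session's cookies into Selenium before the second GET.

def to_selenium_cookie(cookie):
    """Convert one cookiejar cookie into an add_cookie() dict."""
    d = {"name": cookie.name, "value": cookie.value, "path": cookie.path}
    if cookie.expires:
        d["expiry"] = cookie.expires
    if cookie.secure:
        d["secure"] = True
    return d

RUN_BROWSER = False  # flip to True locally against the real server
if RUN_BROWSER:
    import requests
    from selenium import webdriver

    session = requests.Session()
    resp = session.put("https://192.168.178.20:1337/login",
                       params={"email": "me@domain.com",
                               "password": "mypassword"},
                       verify=False)  # self-signed cert on a LAN address
    print(resp.status_code)  # this answers "how can I check my login worked"

    driver = webdriver.Chrome()
    # Must visit the domain first, or add_cookie raises an error.
    driver.get("https://192.168.178.20:1337/")
    for c in session.cookies:
        driver.add_cookie(to_selenium_cookie(c))
    driver.get("https://192.168.178.20:1337/this/that?name=john")
```

This only helps if the server tracks authentication in a cookie; if it returns a token that must go into a header instead, WebDriver alone cannot send it and requests would have to make both calls.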