Retrieving no cookie from GET cookie - python

When I try to get the Set-Cookie header of a response instance, I get a None value, even when I use my actual login username and password.
import urllib2, urllib, cookielib

jar = cookielib.CookieJar()
cookie = urllib2.HTTPCookieProcessor(jar)
opener = urllib2.build_opener(cookie)
data = urllib.urlencode({'email': 'user@hotmail.com', 'pass': 'password', 'login': 'Log+In'})
req = urllib2.Request('http://www.facebook.com/login.php')
response = opener.open(req, data)
response = opener.open(req, data)  # I open it twice on purpose
if "Logout" in response.read():
    print("Logged In")
else:
    print("Not Logged In")
cookie_header = response.headers.get("Set-Cookie")
print(cookie_header)
I know how to set the cookie header, but the problem is that a None value is assigned to cookie_header when I use my actual credentials. How do I get the cookie?

By rearranging the code I was able to fix it:
response = opener.open(req, data)
cookie_header = response.headers.get("Set-Cookie")
response = opener.open(req, data)  # I open it twice on purpose
The cookie was set on the first open, so that is the response to read the header from.

The cookie is set on the first response; you are testing the second one instead. Facebook won't set another cookie there.
You could just get the cookie from the CookieJar object:
first_cookie = list(cookie.cookiejar)[0]  # `cookie` here is the HTTPCookieProcessor from your code
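If the jar holds more than one cookie, you can also pick cookies out by name rather than by position. A minimal Python 3 sketch (where cookielib became http.cookiejar); the cookie name and value are invented for illustration, standing in for what the server's Set-Cookie headers would normally populate:

```python
import http.cookiejar

# Build a jar and add a cookie by hand, mimicking what the
# server's Set-Cookie headers would normally put there.
jar = http.cookiejar.CookieJar()
c = http.cookiejar.Cookie(
    version=0, name='datr', value='abc123', port=None, port_specified=False,
    domain='.facebook.com', domain_specified=True, domain_initial_dot=True,
    path='/', path_specified=True, secure=False, expires=None, discard=True,
    comment=None, comment_url=None, rest={},
)
jar.set_cookie(c)

# A CookieJar is iterable, so a lookup by name is a simple scan.
by_name = {cookie.name: cookie.value for cookie in jar}
print(by_name['datr'])  # abc123
```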
You'd have a much easier time if you used the requests library instead:
import requests

session = requests.Session()
data = {'email': 'user@hotmail.com', 'pass': 'password', 'login': 'Log+In'}
form = session.get('http://www.facebook.com/login.php')  # pick up any initial cookies
response = session.post('http://www.facebook.com/login.php', data=data)
cookie_value = session.cookies['datr']

Related

send cookies inside post request

I'm trying to send a POST request with specific cookies that I got on my PC from a GET request. I searched Google and found:
opener = urllib2.build_opener() # send the cookies
opener.addheaders.append(('Cookie', cookies)) # send the cookies
f = opener.open("http://example")
This code is useful and helped me, but can someone explain it to me and tell me whether the f variable makes the request? I don't need cookielib, just my example. :)
If I typed:
url = 'http://example'  # to see the values, type any password and inspect the cookies
values = {"username": "admin",
          "passwd": "1",
          "lang": "",
          "option": "com_login",
          "task": "login",
          "return": "aW5kZXgucGhw",
          }  # request with the hash
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
result = response.read()
cookies = response.headers['set-cookie']  # to get the cookies
opener = urllib2.build_opener()  # send the cookies
opener.addheaders.append(('Cookie', cookies))  # send the cookies
f = opener.open("http://example.com")
What will happen, two POST requests!?
The request gets sent when you call the open() method on the opener object. The f variable contains a reference to the open connection in case you want to do something else with it later (such as close it again).
Your comments that say 'send the cookies' are in the wrong place; the line where you call append is just preparing the request. It only gets sent when you call open.
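For illustration, here is the same header-preparation step in Python 3 (where urllib2 became urllib.request); the cookie string is a made-up example value. Note that nothing goes over the network until open() is called:

```python
import urllib.request

# Example value, as it would be read from a Set-Cookie response header.
cookies = 'sessionid=abc123; csrftoken=xyz'

opener = urllib.request.build_opener()
# This only prepares the header list; no request is made yet.
# Every header in addheaders is attached to each request the opener sends.
opener.addheaders.append(('Cookie', cookies))

# The request itself would only be sent by opener.open(url).
print(opener.addheaders)
```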

Retrieve OAuth code in redirect URL provided as POST response

Python newbie here, so I'm sure this is a trivial challenge...
I'm using the Requests module to make a POST request to the Instagram API in order to obtain a code which is used later in the OAuth process to get an access token. The code is usually accessed on the client side, as it's provided at the end of the redirect URL.
I have tried using Requests' response history, like this (the client ID is altered for this post):
OAuthURL = "https://api.instagram.com/oauth/authorize/?client_id=cb0096f08a3848e67355f&redirect_uri=https://www.smashboarddashboard.com/whathappened&response_type=code"
OAuth_AccessRequest = requests.post(OAuthURL)
ResHistory = OAuth_AccessRequest.history
for resp in ResHistory:
    print resp.status_code, resp.url
print OAuth_AccessRequest.status_code, OAuth_AccessRequest.url
But the URLs this returns do not reveal the code string; instead, the redirect just looks like this:
302 https://api.instagram.com/oauth/authorize/?client_id=cb0096f08a3848e67355f&redirect_uri=https://www.dashboard.com/whathappened&response_type=code
200 https://instagram.com/accounts/login/?force_classic_login=&next=/oauth/authorize/%3Fclient_id%3Dcb0096f08a3848e67355f%26redirect_uri%3Dhttps%3A//www.smashboarddashboard.com/whathappened%26response_type%3Dcode
Where if you do this on the client side, using a browser, code would be replaced with the actual number string.
Is there a method or approach I can add to the POST request that will allow me to have access to the actual redirect URL string that appears in the web browser?
It should work in a browser if you are already logged in at Instagram. If you are not logged in you are redirected to a login page:
https://instagram.com/accounts/login/?force_classic_login=&next=/oauth/authorize/%3Fclient_id%3Dcb0096f08a3848e67355f%26redirect_uri%3Dhttps%3A//www.smashboarddashboard.com/whathappened%26response_type%3Dcode
Your Python client is not logged in and so it is also redirected to Instagram's login page as shown by the value of OAuth_AccessRequest.url :
>>> import requests
>>> OAuthURL = "https://api.instagram.com/oauth/authorize/?client_id=cb0096f08a3848e67355f&redirect_uri=https://www.smashboarddashboard.com/whathappened&response_type=code"
>>> OAuth_AccessRequest = requests.get(OAuthURL)
>>> OAuth_AccessRequest
<Response [200]>
>>> OAuth_AccessRequest.url
u'https://instagram.com/accounts/login/?force_classic_login=&next=/oauth/authorize/%3Fclient_id%3Dcb0096f08a3848e67355f%26redirect_uri%3Dhttps%3A//www.smashboarddashboard.com/whathappened%26response_type%3Dcode'
So, to get to the next step, your Python client needs to login. This requires that the client extract and set fields to be posted back to the same URL. It also requires cookies and that the Referer header be properly set. There is a hidden CSRF token that must be extracted from the page (you could use BeautifulSoup for example), and form fields username and password must be set. So you would do something like this:
import requests
from bs4 import BeautifulSoup
OAuthURL = "https://api.instagram.com/oauth/authorize/?client_id=cb0096f08a3848e67355f&redirect_uri=https://www.smashboarddashboard.com/whathappened&response_type=code"
session = requests.session() # use session to handle cookies
OAuth_AccessRequest = session.get(OAuthURL)
soup = BeautifulSoup(OAuth_AccessRequest.content)
form = soup.form
login_data = {form.input.attrs['name'] : form.input['value']}
login_data.update({'username': 'your username', 'password': 'your password'})
headers = {'Referer': OAuth_AccessRequest.url}
login_url = 'https://instagram.com{}'.format(form.attrs['action'])
r = session.post(login_url, data=login_data, headers=headers)
>>> r
<Response [400]>
>>> r.json()
{u'error_type': u'OAuthException', u'code': 400, u'error_message': u'Invalid Client ID'}
Which looks like it will work once provided a valid client ID.
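If BeautifulSoup is not available, the hidden CSRF field can also be pulled out with only the standard library's html.parser. A rough sketch against a made-up login form (the field names here are illustrative, not Instagram's actual markup):

```python
from html.parser import HTMLParser

class HiddenInputParser(HTMLParser):
    """Collect name/value pairs of all <input type="hidden"> fields."""
    def __init__(self):
        super().__init__()
        self.hidden = {}

    def handle_starttag(self, tag, attrs):
        if tag == 'input':
            a = dict(attrs)
            if a.get('type') == 'hidden' and 'name' in a:
                self.hidden[a['name']] = a.get('value', '')

# Stand-in for the login page HTML fetched with session.get(OAuthURL).
page = '''
<form method="POST" action="/accounts/login/">
  <input type="hidden" name="csrfmiddlewaretoken" value="token123">
  <input type="text" name="username">
  <input type="password" name="password">
</form>
'''

parser = HiddenInputParser()
parser.feed(page)
login_data = dict(parser.hidden)
login_data.update({'username': 'your username', 'password': 'your password'})
print(login_data)
```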
As an alternative, you could look at mechanize which will handle the form submission for you, including the hidden CSRF field:
import mechanize
OAuthURL = "https://api.instagram.com/oauth/authorize/?client_id=cb0096f08a3848e67355f&redirect_uri=https://www.smashboarddashboard.com/whathappened&response_type=code"
br = mechanize.Browser()
br.open(OAuthURL)
br.select_form(nr=0)
br.form['username'] = 'your username'
br.form['password'] = 'your password'
r = br.submit()
response = r.read()
But this doesn't work, because the Referer header is not being set; however, you could use this method if you can figure out a solution to that.

Unable to retain login credentials across pages while using requests

I am pretty new to the urllib and requests modules in Python. I am trying to access a wiki page on my company's website; it requires me to provide my login credentials through a pop-up window when I access it through a browser.
I was able to write the following script to successfully access the web page and read it:
import sys
import urllib.parse
import urllib.request
import getpass
import http.cookiejar

wiki_page = 'http://wiki.company.com/wiki_page'
top_level_url = 'http://login.company.com/'
username = input("Enter Username: ")
password = getpass.getpass('Enter Password: ')

# Authenticate with login server and fetch the wiki page
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
cj = http.cookiejar.CookieJar()
password_mgr.add_password(None, top_level_url, username, password)
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj), handler)
opener.open(wiki_page)
urllib.request.install_opener(opener)
with urllib.request.urlopen(wiki_page) as response:
    # Do something
    pass
But now I need to use the requests module to do the same. I tried several methods, including sessions, but could not get it to work. The following is the piece of code which I think is closest to the actual solution, but it gives Response 200 for the first print and Response 401 for the second:
s = requests.Session()
print(s.post('http://login.company.com/', auth=(username, password))) # I have tried s.post() as well as s.get() in this line
print(s.get('http://wiki.company.com/wiki_page'))
The site uses the Basic Auth authorization scheme; you'll need to send the login credentials with each request.
Set the Session.auth attribute to a tuple with the username and password on the session:
s = requests.Session()
s.auth = (username, password)
response = s.get('http://wiki.company.com/wiki_page')
print(response.text)
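Under the hood there is no session state to keep: Basic Auth is just an Authorization header derived from the credentials and resent on every request. A sketch of roughly what requests builds from Session.auth (a simplified illustration, not requests' actual internals):

```python
import base64

def basic_auth_header(username, password):
    # Basic Auth is simply "user:pass" base64-encoded,
    # attached to every request as an Authorization header.
    token = base64.b64encode('{}:{}'.format(username, password).encode('utf-8'))
    return 'Basic ' + token.decode('ascii')

print(basic_auth_header('user', 'pass'))  # Basic dXNlcjpwYXNz
```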
The urllib.request.HTTPPasswordMgrWithDefaultRealm() object would normally only respond to challenges on URLs that start with http://login.company.com/ (so any deeper path will do too), and not send the password elsewhere.
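You can see that scoping behaviour with the password manager alone: credentials registered for the login URL match deeper paths on that host, but not other hosts (the hostnames and credentials below are placeholders):

```python
import urllib.request

mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
# None as the realm makes these credentials the default for any realm
# on URLs under http://login.company.com/
mgr.add_password(None, 'http://login.company.com/', 'alice', 's3cret')

# Deeper paths on the registered host still match...
print(mgr.find_user_password(None, 'http://login.company.com/some/deep/path'))
# ...but a different host gets nothing.
print(mgr.find_user_password(None, 'http://wiki.company.com/wiki_page'))
```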
If the simple approach (setting Session.auth) doesn't work, you'll need to find out what response is returned by accessing http://wiki.company.com/wiki_page directly, which is what your original code does. If the server redirects you to a login page, where you then use the Basic Auth information, you can replicate that:
s = requests.Session()
response = s.get('http://wiki.company.com/wiki_page', allow_redirects=False)
if response.status_code in (302, 303):
    target = response.headers['location']
    authenticated = s.get(target, auth=(username, password))
    # continue on to the wiki again
    response = s.get('http://wiki.company.com/wiki_page')
You'll have to investigate carefully what responses you get from the server. Open up an interactive console and see what responses you get back. Look at response.status_code and response.headers and response.text for hints. If you leave allow_redirects to the default True, look at response.history to see if there were any intermediate redirections.

Python - Using Set-Cookie for cookie use does not work

When I get the Set-Cookie header and try to use it, it doesn't seem that I'm logged in to Facebook...
import urllib, urllib2

data = urllib.urlencode({"email": "swagexample@hotmail.com", "pass": "password"})
request = urllib2.Request("http://www.facebook.com/login.php", data)
request.add_header("User-Agent", "Mozilla 5.0")
response = urllib2.urlopen(request)
cookie = response.headers.get("Set-Cookie")

new_request = urllib2.Request("http://www.facebook.com/login.php")
new_request.add_header("User-Agent", "Mozilla 5.0")
new_request.add_header("Cookie", cookie)
new_response = urllib2.urlopen(new_request)
if "Logout" in new_response.read():
    print("Logged in.")  # No output
Why?
First, the Set-Cookie header format is different from the Cookie header format.
A Set-Cookie header contains additional attributes (domain, expiry, ...), which you need to strip to use the value in a Cookie header:
cookie = '; '.join(
    x.split(';', 1)[0] for x in response.headers.getheaders("Set-Cookie")
)
Even if you do the above, you will still not get what you want, because the default urllib2 handler does not carry the cookie across the redirect.
Why don't you use urllib2.HTTPCookieProcessor as you did before?
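In Python 3 the standard library can do the stripping for you: http.cookies.SimpleCookie parses a Set-Cookie value and keeps the attributes separate from the bare name=value pair. A small sketch with a made-up header value:

```python
from http.cookies import SimpleCookie

# Example Set-Cookie value; the Domain/Path/HttpOnly attributes must
# not be echoed back in a Cookie request header.
set_cookie = 'c_user=100001; Domain=.facebook.com; Path=/; HttpOnly'

parsed = SimpleCookie()
parsed.load(set_cookie)

# Rebuild a Cookie-header value from just the name=value pairs;
# the attributes stay behind on the parsed Morsel objects.
cookie_header = '; '.join(
    '{}={}'.format(name, morsel.value) for name, morsel in parsed.items()
)
print(cookie_header)  # c_user=100001
```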

Extract new cookies after login with urllib2

I ran into a problem which I cannot solve. I am able to successfully get the cookies and use them after logging in to a web application. The problem is that the web application sets new cookies after a couple of clicks, and I need those too.
How do I extract, or get, the additional cookies after the login? Here is my code so far:
def _login_to_page(self, url):
    cj = cookielib.CookieJar()
    cookiehandler = urllib2.HTTPCookieProcessor(cj)
    proxy_support = urllib2.ProxyHandler({"https": self._proxy})
    opener = urllib2.build_opener(cookiehandler, proxy_support)
    try:
        login_post_data = {'op': 'login', 'user': self._username, 'passwd': self._password, 'api_type': 'json'}
        response = opener.open(str(self._path_to_login_url), urllib.urlencode(login_post_data), self._request_timeout).read()
        if response:
            print "[+] Login successful"
            self._login_cookies = cj
        else:
            print "[-] Login has probably failed. Wrong Credentials?"
    except Exception, e:
        print "[-] Could not log in: " + repr(e)

def get_url_loggedin(self, url):
    # self._login_cookies holds the cookies from the previous login
    cookiehandler = urllib2.HTTPCookieProcessor(self._login_cookies)
    proxy_support = urllib2.ProxyHandler({"http": self._proxy})
    opener = urllib2.build_opener(cookiehandler, proxy_support)
    urllib2.install_opener(opener)
    try:
        url_response = opener.open(url, None, self._request_timeout).read()
    except Exception, e:
        print "[-] Could not read page: "
        print "[??] Error: " + repr(e)
Sorry if my English is a bit weird; I'm not a native speaker.
After the application has set the cookies you want, call cj.save('cookies.txt') to save the currently set cookies to that file, and cj.load('cookies.txt') to load them at application start. Note that the plain cookielib.CookieJar does not implement save()/load(); use a FileCookieJar subclass such as cookielib.MozillaCookieJar or cookielib.LWPCookieJar.
See the cookielib documentation.
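A minimal round trip with the Python 3 equivalent, http.cookiejar.MozillaCookieJar (the cookie name, value, and domain are invented for the example):

```python
import http.cookiejar
import os
import tempfile

# A MozillaCookieJar can persist itself in the Netscape cookies.txt format.
jar = http.cookiejar.MozillaCookieJar()
jar.set_cookie(http.cookiejar.Cookie(
    version=0, name='session', value='abc123', port=None, port_specified=False,
    domain='.example.com', domain_specified=True, domain_initial_dot=True,
    path='/', path_specified=True, secure=False, expires=2147483647,
    discard=False, comment=None, comment_url=None, rest={},
))

path = os.path.join(tempfile.mkdtemp(), 'cookies.txt')
jar.save(path)  # persist the current cookies to disk

restored = http.cookiejar.MozillaCookieJar()
restored.load(path)  # reload them on the next run
print([c.name for c in restored])  # ['session']
```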
