I'm trying to send a POST request with specific cookies that I obtained on my PC from a GET request. I searched on Google and found this:
opener = urllib2.build_opener() # send the cookies
opener.addheaders.append(('Cookie', cookies)) # send the cookies
f = opener.open("http://example")
This code works and helped me, but can someone explain it to me and tell me whether the f variable makes the request? I don't need cookielib, just my example. :)
For example, if I write:
url = 'http://example'  # to find these values, try logging in with any password and inspect the request
values = {"username": "admin",
          "passwd": "1",
          "lang": "",
          "option": "com_login",
          "task": "login",
          "return": "aW5kZXgucGhw",
          }  # request with the hash
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
result = response.read()
cookies=response.headers['set-cookie'] #to get the cookies
opener = urllib2.build_opener() # send the cookies
opener.addheaders.append(('Cookie', cookies)) # send the cookies
f = opener.open("http://example.com")
What happens then, will two POST requests be sent?!
The request gets sent when you call the open() method on the opener object. The f variable contains the response, a file-like object, in case you want to do something with it later (such as read it or close it).
Your comments that say 'send the cookies' are in the wrong place; the line where you call append() just prepares the request. It only gets sent when you call open().
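To see that nothing goes over the wire until open() is called, here is a small Python 3 sketch (urllib2 became urllib.request in Python 3; the URL and cookie string here are made up):

```python
import urllib.request

opener = urllib.request.build_opener()
# append() only stores the header on the opener; no request is made here
opener.addheaders.append(('Cookie', 'session=abc123'))
print(opener.addheaders)

# only this call would actually send the HTTP request and return the
# response object:
# f = opener.open('http://example.com')
```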
Related
The site URL is http://rajresults.nic.in/resbserx18.htm when sending the data, but when the response comes back the URL changes to an .asp page. So which URL does the user need to send the request to, the .asp or the .htm?
Request:
import requests
# data to get the result
para = {'roll_no': '2000000', 'B1': 'Submit'}
# this is the url where the data is entered to get the asp response
url = 'http://rajresults.nic.in/resbserx18.htm'
result = requests.post(url, data=para)
result.text
Response
'The page you are looking for cannot be displayed because an invalid method (HTTP verb) is being used.'
Okay after a little bit of work, I found it's some issue with the headers.
I did some trial and error, and found that it checks to make sure the Host header is set.
To debug this, I just incrementally removed chrome's request headers and found which one this web service was particular about.
import requests

headers = {
    "Host": "rajresults.nic.in"
}

r = requests.post('http://rajresults.nic.in/resbserx18.asp',
                  headers=headers,
                  data={'roll_no': 2000000, 'B1': 'Submit'})

print(r.text)
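You can inspect what a urllib-style request would send without touching the network; this Python 3 sketch builds the same POST and shows that supplying a body makes it a POST and that the Host header is attached (values taken from the answer above):

```python
import urllib.request
from urllib.parse import urlencode

# encode the form fields as a request body
data = urlencode({'roll_no': 2000000, 'B1': 'Submit'}).encode()
req = urllib.request.Request('http://rajresults.nic.in/resbserx18.asp',
                             data=data,
                             headers={'Host': 'rajresults.nic.in'})

print(req.get_method())        # POST, because a request body was supplied
print(req.get_header('Host'))  # rajresults.nic.in
```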
When I get the Set-Cookie header and try to use it, it doesn't seem that I'm logged in to Facebook...
import urllib, urllib2
data = urllib.urlencode({"email": "swagexample@hotmail.com", "pass": "password"})
request = urllib2.Request("http://www.facebook.com/login.php", data)
request.add_header("User-Agent", "Mozilla 5.0")
response = urllib2.urlopen(request)
cookie = response.headers.get("Set-Cookie")
new_request = urllib2.Request("http://www.facebook.com/login.php")
new_request.add_header("User-Agent", "Mozilla 5.0")
new_request.add_header("Cookie", cookie)
new_response = urllib2.urlopen(new_request)
if "Logout" in new_response.read():
    print("Logged in.")  # No output
Why?
First, the Set-Cookie header format is different from the Cookie header format.
A Set-Cookie header carries additional attributes (domain, expires, ...), and you need to strip them to use the value in a Cookie header.
cookie = '; '.join(
x.split(';', 1)[0] for x in response.headers.getheaders("Set-Cookie")
)
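As a runnable Python 3 illustration of that conversion (the header values here are made up):

```python
# each Set-Cookie header carries attributes after the first ';' that do not
# belong in a Cookie request header
set_cookie_headers = [
    "sessionid=abc123; Path=/; HttpOnly",
    "csrftoken=xyz789; Domain=.example.com; Secure",
]

# keep only the leading name=value pair of each header
cookie_header = '; '.join(h.split(';', 1)[0] for h in set_cookie_headers)
print(cookie_header)  # sessionid=abc123; csrftoken=xyz789
```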
Even if you do the above, you will still not get what you want, because the default urllib2 handlers do not resend cookies on redirect.
Why don't you use urllib2.HTTPCookieProcessor as you did before?
When I try to get the Set-Cookie header of a response instance, I get a None value when I use my actual login username and password.
import urllib2, urllib, cookielib
jar = cookielib.CookieJar()
cookie = urllib2.HTTPCookieProcessor(jar)
opener = urllib2.build_opener(cookie)
data = urllib.urlencode({'email': 'user@hotmail.com', 'pass': 'password', 'login': 'Log+In'})
req = urllib2.Request('http://www.facebook.com/login.php')
response = opener.open(req, data)
response = opener.open(req, data) #I open it twice on purpose
if "Logout" in response.read():
    print("Logged In")
else:
    print("Not Logged In")
cookie_header = response.headers.get("Set-Cookie")
print(cookie_header)
I know how to set the cookie header, but the problem is a None value is being assigned to cookie_header when I use my actual credentials. How do I get the cookie?
By rearranging the code I was able to fix it up.
response = opener.open(req, data)
cookie_header = response.headers.get("Set-Cookie")
response = opener.open(req, data) #I open it twice on purpose
Because the cookie was set on the first open.
The cookie is set on the first response, but you are testing the second one instead; Facebook won't set another cookie there.
You could just get the cookie from the CookieJar object:
cookie = list(cookie.cookiejar)[0]
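In Python 3 the module is http.cookiejar; this sketch builds a jar by hand (the cookie value and domain are made up) just to show how to pull a cookie back out of it:

```python
from http.cookiejar import Cookie, CookieJar

def make_cookie(name, value, domain):
    # minimal Cookie for illustration; most fields are placeholders
    return Cookie(version=0, name=name, value=value, port=None,
                  port_specified=False, domain=domain, domain_specified=True,
                  domain_initial_dot=False, path='/', path_specified=True,
                  secure=False, expires=None, discard=True, comment=None,
                  comment_url=None, rest={})

jar = CookieJar()
jar.set_cookie(make_cookie('datr', 'example-value', '.facebook.com'))

# a CookieJar is iterable, so list() exposes the Cookie objects
first = list(jar)[0]
print(first.name, first.value)  # datr example-value
```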
You'd have a much easier time of it if you used the requests library instead:
import requests
session = requests.Session()
data = {'email': 'user@hotmail.com', 'pass': 'password', 'login': 'Log+In'}
form = session.get('http://www.facebook.com/login.php')
response = session.post('http://www.facebook.com/login.php', data=data)
cookie_value = session.cookies['datr']
I'm studying the requests library (http://docs.python-requests.org/en/latest/) and ran into a problem fetching a page with cookies using requests.
For example:
url2= 'https://passport.baidu.com'
parsedCookies = {'PTOKEN': '412f...', 'BDUSS': 'hnN2...', ...}  # sorry, the cookie values are replaced by ... for privacy
req = requests.get(url2, cookies=parsedCookies)
text=req.text.encode('utf-8','ignore')
f=open('before.html','w')
f.write(text)
f.close()
req.close()
When I use the code above to fetch the page, it just saves the login page to 'before.html' instead of the logged-in page, which means I haven't actually logged in successfully.
But if I use urllib2 to fetch the page, it works as expected.
parsedCookies = "PTOKEN=412f...;BDUSS=hnN2...;..."  # different format but the same content as the cookies above
req = urllib2.Request(url2)
req.add_header('Cookie', parsedCookies)
ret = urllib2.urlopen(req)
f=open('before_urllib2.html','w')
f.write(ret.read())
f.close()
ret.close()
When I use this code, it saves the logged-in page in before_urllib2.html.
Are there any mistakes in my code? Any reply would be appreciated.
You can use a Session object to get what you desire:
url2='http://passport.baidu.com'
session = requests.Session() # create a Session object
cookie = requests.utils.cookiejar_from_dict(parsedCookies)
session.cookies.update(cookie) # set the cookies of the Session object
req = session.get(url2, headers=headers,allow_redirects=True)
If you use the requests.get function, it doesn't send cookies for the redirected page. Instead, if you use the Session().get function, it will maintain and send cookies across all HTTP requests; this is exactly what the concept of a "session" means.
Let me try to elaborate to you what happens here:
When I sent cookies to http://passport.baidu.com/center and set the parameter allow_redirects to False, the returned status code was 302 and one of the response headers was 'location': '/center?_t=1380462657' (this is a dynamic value generated by the server; replace it with what you get from the server):
url2= 'http://passport.baidu.com/center'
req = requests.get(url2, cookies=parsedCookies, allow_redirects=False)
print req.status_code # output 302
print req.headers
But when I set the parameter allow_redirects to True, it still doesn't reach the page (http://passport.baidu.com/center?_t=1380462657); the server returns the login page instead. The reason is that requests.get doesn't send cookies for the redirected page, here http://passport.baidu.com/center?_t=1380462657, so we cannot log in successfully. That is why we need the Session object.
If I set url2 = 'http://passport.baidu.com/center?_t=1380462657', it returns the page you want. So one solution is to use the above code to get the dynamic location value, form the path to your account like http://passport.baidu.com/center?_t=1380462657, and then request that page:
url2 = 'http://passport.baidu.com' + req.headers.get('location')
req = session.get(url2, cookies=parsedCookies, allow_redirects=True)
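The manual string concatenation above works because the location value is an absolute path; urllib.parse.urljoin does the same resolution more robustly:

```python
from urllib.parse import urljoin

base = 'http://passport.baidu.com/center'
location = '/center?_t=1380462657'  # taken from the 302 response headers

# an absolute-path location replaces the path of the base URL
print(urljoin(base, location))  # http://passport.baidu.com/center?_t=1380462657
```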
But this is cumbersome, so when dealing with cookies, the Session object does an excellent job for us!
I ran into a problem which I cannot solve. I am able to successfully get the cookies and use them after logging in to a web application. The problem is that the web application sets new cookies after a couple of clicks, which I need to have.
How do I extract, or get the additional, cookies after the login? Here is my code so far:
def _login_to_page(self, url):
    cj = cookielib.CookieJar()
    cookiehandler = urllib2.HTTPCookieProcessor(cj)
    proxy_support = urllib2.ProxyHandler({"https": self._proxy})
    opener = urllib2.build_opener(cookiehandler, proxy_support)
    try:
        login_post_data = {'op': 'login', 'user': self._username, 'passwd': self._password, 'api_type': 'json'}
        response = opener.open(str(self._path_to_login_url), urllib.urlencode(login_post_data), self._request_timeout).read()
        if response:
            print "[+] Login successful"
            self._login_cookies = cj
        else:
            print "[-] Login has probably failed. Wrong Credentials?"
    except Exception, e:
        print "[-] Could not log in: " + repr(e)

def get_url_loggedin(self, url):
    # self._login_cookies holds the cookies from the previous login
    cookiehandler = urllib2.HTTPCookieProcessor(self._login_cookies)
    proxy_support = urllib2.ProxyHandler({"http": self._proxy})
    opener = urllib2.build_opener(cookiehandler, proxy_support)
    urllib2.install_opener(opener)
    try:
        url_response = opener.open(url, None, self._request_timeout).read()
    except Exception, e:
        print "[-] Could not read page: "
        print "[??] Error: " + repr(e)
Sorry if my English is a bit weird; I'm not a native speaker.
After the application has set the cookies you want, call cj.save('cookies.txt') to save the currently set cookies to that file, and cj.load('cookies.txt') to load them at application start. Note that a plain cookielib.CookieJar has no save()/load() methods; use cookielib.LWPCookieJar or cookielib.MozillaCookieJar instead.
See the cookielib documentation
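A runnable Python 3 sketch of the save/load round trip (http.cookiejar replaces cookielib in Python 3; the cookie and file path here are made up):

```python
import os
import tempfile
from http.cookiejar import Cookie, LWPCookieJar

# a plain CookieJar cannot be saved; LWPCookieJar (or MozillaCookieJar) can
jar = LWPCookieJar()
jar.set_cookie(Cookie(version=0, name='session', value='abc123', port=None,
                      port_specified=False, domain='example.com',
                      domain_specified=True, domain_initial_dot=False,
                      path='/', path_specified=True, secure=False,
                      expires=None, discard=True, comment=None,
                      comment_url=None, rest={}))

path = os.path.join(tempfile.mkdtemp(), 'cookies.txt')
# session cookies are marked "discard", so keep them explicitly
jar.save(path, ignore_discard=True)

restored = LWPCookieJar()
restored.load(path, ignore_discard=True)
print([c.name for c in restored])  # ['session']
```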