Unable to do HTTP POST in Python

I'm trying to configure an Access Point (AP) in my office through the HTTP POST method via Python.
I was able to log in to the AP with Python HTTP authentication code, but I get stuck when I open the AP's wireless page to give inputs such as the SSID, channel and passphrase. There is an Apply button at the end of the wireless page.
When I try to do that using the code below, I don't see any changes reflected on the AP side. Maybe my code is wrong, or I'm not following the correct procedure to post the data to the AP. How can I resolve this issue?
import urllib2
import requests

def login():
    link = "http://192.168.1.11/start_apply2.htm"
    username = 'admin'
    password = 'admin'
    p = urllib2.HTTPPasswordMgrWithDefaultRealm()
    p.add_password(None, link, username, password)
    handler = urllib2.HTTPBasicAuthHandler(p)
    opener = urllib2.build_opener(handler)
    urllib2.install_opener(opener)
    age = urllib2.urlopen(link).read()
    payload = {'wl_ssid_org': 'nick', 'wl_wpa_psk_org': 12345678}
    r = requests.get(link)
    r = requests.get(link, params=payload)
    r = requests.post(link, params=payload)

login()
Note: When I run this code, it throws a 401 Unauthorized error. I can log in to the AP using the same auth code, so I don't understand why the authentication fails here.

In your case, you should change
r = requests.post(link, params=payload)
to
r = requests.post(link, data=payload)
Then you should be able to do the POST request successfully.
You should refer to the requests documentation to find more tutorials.
You can even replace the urllib2 code entirely with code that uses requests.
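For reference, here is a minimal sketch of how the whole thing might look using requests alone. The URL, credentials, and field names are the ones from the question; whether the AP firmware actually expects those exact form fields is an assumption. Sending the Basic Auth credentials with the POST itself should also deal with the 401 mentioned above, since requests does not reuse the urllib2 opener.
import requests
from requests.auth import HTTPBasicAuth

link = "http://192.168.1.11/start_apply2.htm"
auth = HTTPBasicAuth('admin', 'admin')

# Field names taken from the question; the AP firmware may expect others.
payload = {'wl_ssid_org': 'nick', 'wl_wpa_psk_org': '12345678'}

# Send the credentials with the POST itself so the AP doesn't answer 401.
r = requests.post(link, data=payload, auth=auth)
print r.status_code
print r.text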

Related

How do I verify myself when using Python request posts

I can log in and get a website, and post some messages. But some of the other POST calls fail, and the website returns a message asking me to log in again.
I think the reason is that I don't authenticate myself while posting, so the server notices and breaks the connection. But how do I do that? I'm using a Mac and Python 2.7.
I use this code to login:
Connection = requests.session()
result = Connection.post(url, headers = headd, data = data)
and it succeeds.
These are the other POST calls after I log in:
result = Connection.post(url, headers = headers, data = data)
but they fail.
I also tried this:
result = Connection.post(url, headers = headers, data = data, verify=False)
But it failed again. The URL here is an HTTPS website. Does that matter? I mean, how do I authenticate myself if necessary? I think it's the website that rejects the post and breaks the session.
try using:
s = requests.session()
s.post(url, headers = headers, data = data)
instead of
Connection.post(...)
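In other words, create the session once and then reuse that same session object for the login POST and for every later POST, so the cookies from the login are sent each time. A rough sketch, with placeholder URLs, headers, and form fields standing in for the real ones from your site:
import requests

# Placeholder values; substitute the real URLs, headers and form fields.
login_url = 'https://example.com/login'
post_url = 'https://example.com/post'
headers = {'User-Agent': 'Mozilla/5.0'}
login_data = {'username': 'me', 'password': 'secret'}
message_data = {'message': 'hello'}

s = requests.session()

# Log in once; the session keeps any cookies the server sets.
login = s.post(login_url, headers=headers, data=login_data)

# Later posts reuse the same session (and therefore the same cookies).
# Some sites also check the Referer header, so it can help to set it.
headers['Referer'] = login_url
result = s.post(post_url, headers=headers, data=message_data)
print result.status_code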

Retrieve OAuth code in redirect URL provided as POST response

Python newbie here, so I'm sure this is a trivial challenge...
I'm using the Requests module to make a POST request to the Instagram API in order to obtain a code which is used later in the OAuth process to get an access token. The code is usually accessed on the client side, as it's provided at the end of the redirect URL.
I have tried using Requests' response history, like this (the client ID is altered for this post):
OAuthURL = "https://api.instagram.com/oauth/authorize/?client_id=cb0096f08a3848e67355f&redirect_uri=https://www.smashboarddashboard.com/whathappened&response_type=code"
OAuth_AccessRequest = requests.post(OAuthURL)
ResHistory = OAuth_AccessRequest.history
for resp in ResHistory:
    print resp.status_code, resp.url
print OAuth_AccessRequest.status_code, OAuth_AccessRequest.url
But the URLs this returns do not reveal the code string; instead, the redirects just look like this:
302 https://api.instagram.com/oauth/authorize/?client_id=cb0096f08a3848e67355f&redirect_uri=https://www.dashboard.com/whathappened&response_type=code
200 https://instagram.com/accounts/login/?force_classic_login=&next=/oauth/authorize/%3Fclient_id%cb0096f08a3848e67355f%26redirect_uri%3Dhttps%3A//www.smashboarddashboard.com/whathappened%26response_type%3Dcode
If you do this on the client side, in a browser, code would be replaced with the actual string.
Is there a method or approach I can add to the POST request that will allow me to have access to the actual redirect URL string that appears in the web browser?
It should work in a browser if you are already logged in at Instagram. If you are not logged in you are redirected to a login page:
https://instagram.com/accounts/login/?force_classic_login=&next=/oauth/authorize/%3Fclient_id%3Dcb0096f08a3848e67355f%26redirect_uri%3Dhttps%3A//www.smashboarddashboard.com/whathappened%26response_type%3Dcode
Your Python client is not logged in and so it is also redirected to Instagram's login page as shown by the value of OAuth_AccessRequest.url :
>>> import requests
>>> OAuthURL = "https://api.instagram.com/oauth/authorize/?client_id=cb0096f08a3848e67355f&redirect_uri=https://www.smashboarddashboard.com/whathappened&response_type=code"
>>> OAuth_AccessRequest = requests.get(OAuthURL)
>>> OAuth_AccessRequest
<Response [200]>
>>> OAuth_AccessRequest.url
u'https://instagram.com/accounts/login/?force_classic_login=&next=/oauth/authorize/%3Fclient_id%3Dcb0096f08a3848e67355f%26redirect_uri%3Dhttps%3A//www.smashboarddashboard.com/whathappened%26response_type%3Dcode'
So, to get to the next step, your Python client needs to login. This requires that the client extract and set fields to be posted back to the same URL. It also requires cookies and that the Referer header be properly set. There is a hidden CSRF token that must be extracted from the page (you could use BeautifulSoup for example), and form fields username and password must be set. So you would do something like this:
import requests
from bs4 import BeautifulSoup
OAuthURL = "https://api.instagram.com/oauth/authorize/?client_id=cb0096f08a3848e67355f&redirect_uri=https://www.smashboarddashboard.com/whathappened&response_type=code"
session = requests.session() # use session to handle cookies
OAuth_AccessRequest = session.get(OAuthURL)
soup = BeautifulSoup(OAuth_AccessRequest.content)
form = soup.form
login_data = {form.input.attrs['name'] : form.input['value']}
login_data.update({'username': 'your username', 'password': 'your password'})
headers = {'Referer': OAuth_AccessRequest.url}
login_url = 'https://instagram.com{}'.format(form.attrs['action'])
r = session.post(login_url, data=login_data, headers=headers)
>>> r
<Response [400]>
>>> r.json()
{u'error_type': u'OAuthException', u'code': 400, u'error_message': u'Invalid Client ID'}
That looks like it will work once a valid client ID is provided.
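If the login does go through with a real client ID, the authorization code should then show up on the redirect back to your redirect_uri. One way to pull it out, continuing with the session and OAuthURL from the code above (a sketch, assuming Instagram redirects straight to the redirect_uri once the session is logged in):
from urlparse import urlparse, parse_qs  # Python 2, to match the code above

# Re-request the authorize URL with the now-authenticated session, but stop
# at the first redirect so its Location header can be inspected directly.
authorize = session.get(OAuthURL, allow_redirects=False)
location = authorize.headers.get('location', '')

# The code arrives as a query parameter appended to the redirect_uri.
code = parse_qs(urlparse(location).query).get('code', [None])[0]
print code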
As an alternative, you could look at mechanize which will handle the form submission for you, including the hidden CSRF field:
import mechanize
OAuthURL = "https://api.instagram.com/oauth/authorize/?client_id=cb0096f08a3848e67355f&redirect_uri=https://www.smashboarddashboard.com/whathappened&response_type=code"
br = mechanize.Browser()
br.open(OAuthURL)
br.select_form(nr=0)
br.form['username'] = 'your username'
br.form['password'] = 'your password'
r = br.submit()
response = r.read()
But this doesn't work, because the Referer header is not being set; you could still use this method if you can find a way around that.
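One thing that may be worth trying, offered only as an untested sketch: mechanize can be told to manage the Referer header itself via set_handle_referer, which might be enough to get past that check:
import mechanize

OAuthURL = "https://api.instagram.com/oauth/authorize/?client_id=cb0096f08a3848e67355f&redirect_uri=https://www.smashboarddashboard.com/whathappened&response_type=code"

br = mechanize.Browser()
# Ask mechanize to send a Referer header when following links and submitting forms.
br.set_handle_referer(True)
br.open(OAuthURL)
br.select_form(nr=0)
br.form['username'] = 'your username'
br.form['password'] = 'your password'
r = br.submit()
response = r.read()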

Unable to retain login credentials across pages while using requests

I am pretty new to the urllib and requests modules in Python. I am trying to access a wiki page on my company's website which asks for my login credentials through a pop-up window when I open it in a browser.
I was able to access the page and read it with the following script:
import sys
import urllib.parse
import urllib.request
import getpass
import http.cookiejar
wiki_page = 'http://wiki.company.com/wiki_page'
top_level_url = 'http://login.company.com/'
username = input("Enter Username: ")
password = getpass.getpass('Enter Password: ')
# Authenticate with login server and fetch the wiki page
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
cj = http.cookiejar.CookieJar()
password_mgr.add_password(None, top_level_url, username, password)
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj),handler)
opener.open(wiki_page)
urllib.request.install_opener(opener)
with urllib.request.urlopen(wiki_page) as response:
    # Do something
    pass
But now I need to do the same with the requests module. I tried several approaches, including sessions, but could not get it to work. The following is the code that I think comes closest, but it gives Response 200 for the first print and Response 401 for the second:
s = requests.Session()
print(s.post('http://login.company.com/', auth=(username, password))) # I have tried s.post() as well as s.get() in this line
print(s.get('http://wiki.company.com/wiki_page'))
The site uses the Basic Auth authorization scheme; you'll need to send the login credentials with each request.
Set the Session.auth attribute to a tuple with the username and password on the session:
s = requests.Session()
s.auth = (username, password)
response = s.get('http://wiki.company.com/wiki_page')
print(response.text)
The urllib.request.HTTPPasswordMgrWithDefaultRealm() object would normally only respond to challenges on URLs that start with http://login.company.com/ (so any deeper path will do too), and not send the password elsewhere.
If the simple approach (setting Session.auth) doesn't work, you'll need to find out what response is returned by accessing http://wiki.company.com/wiki_page directly, which is what your original code does. If the server redirects you to a login page, where you then use the Basic Auth information, you can replicate that:
s = requests.Session()
response = s.get('http://wiki.company.com/wiki_page', allow_redirects=False)
if response.status_code in (302, 303):
    target = response.headers['location']
    authenticated = s.get(target, auth=(username, password))
    # continue on to the wiki again
    response = s.get('http://wiki.company.com/wiki_page')
You'll have to investigate carefully what responses you get from the server. Open up an interactive console and see what responses you get back. Look at response.status_code and response.headers and response.text for hints. If you leave allow_redirects to the default True, look at response.history to see if there were any intermediate redirections.
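For example, something along these lines in an interactive session (illustrative only, using the URLs from the question) will usually show what the server is asking for:
import requests

s = requests.Session()
r = s.get('http://wiki.company.com/wiki_page')

print(r.status_code)                      # 401 according to the question
print(r.headers.get('www-authenticate'))  # which auth scheme is requested?
print(r.history)                          # any redirects along the way?
print(r.text[:500])                       # is there a login form in the body?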

python-requests - can't login

I'm trying to scrape some data, but first I need to log in. I am trying to use python-requests, and here is my code so far:
import requests

login_url = "https://www.wehelpen.nl/login/"
users_url = "https://www.wehelpen.nl/ik-zoek-hulp/hulpprofielen/"
profile_url = "https://www.wehelpen.nl/profiel/01136/hulpvragen/"
uname = "****"
pword = "****"

def main():
    s = login(uname, pword, login_url)
    page = s.get(users_url)
    print makeUTF8(page.text)  # grab html and grep for logged in text to make sure!

def login(uname, pword, url):
    s = requests.session()
    s.get(url, auth=(uname, pword))
    csrftoken = s.cookies['csrftoken']
    login_data = dict(username=uname, password=pword,
                      csrfmiddlewaretoken=csrftoken, next='/')
    s.post(url, data=login_data, headers=dict(Referer=url))
    return s

def makeUTF8(text):
    return text.encode('utf-8')
Basically, I need to log in at login_url with a POST request (using a CSRF token, because I get an error otherwise). Then, using the session object returned from login(), I want to check that I am logged in by making a GET request to a user page. On the returned page.text I can grep for a certain href which tells me whether I am logged in or not.
So far I am unable to log in and keep a working session object. Can anyone help me? This has been the most tedious Python experience of my life.
EDIT: I have searched, searched and searched SO for answers and nothing is working...
The dictionary keys need to match the HTML names of the form's fields; those are what the server looks for when it processes the login. In your case those names are identification and password.
login_data = {'identification': uname, 'password': pword, ...}
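Putting that together with the rest of the question's code, the login might look like this (a sketch; identification and password come from the form's HTML, and the CSRF and Referer handling is taken from the question):
import requests

login_url = "https://www.wehelpen.nl/login/"
uname = "****"
pword = "****"

s = requests.session()
s.get(login_url)                      # pick up the csrftoken cookie
csrftoken = s.cookies['csrftoken']

login_data = {
    'identification': uname,          # field names the login form actually uses
    'password': pword,
    'csrfmiddlewaretoken': csrftoken,
    'next': '/',
}
s.post(login_url, data=login_data, headers={'Referer': login_url})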
There are lots of options, but I have had success using cookielib instead of trying to "manually" handle the cookies.
import urllib2
import cookielib
cookiejar = cookielib.CookieJar()
cookiejar.clear()
urlOpener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
# ...etc...
Some potentially relevant answers on getting this set up are on SO, including: https://stackoverflow.com/a/5826033/1681480
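For completeness, the opener would then be used for both the login POST and the later GETs, something like this (a sketch with a placeholder URL and form fields):
import urllib
import urllib2
import cookielib

cookiejar = cookielib.CookieJar()
urlOpener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))

# Placeholder URL and form fields; substitute the site's real ones.
login_url = 'https://www.example.com/login/'
form_data = urllib.urlencode({'identification': 'user', 'password': 'secret'})

urlOpener.open(login_url, form_data)   # POST the form; cookies land in cookiejar
page = urlOpener.open('https://www.example.com/profile/').read()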

Login to website using python

I am trying to log in to this page using Python.
I tried using the steps described on this other Stack Overflow post, and got the following code:
import urllib, urllib2, cookielib
username = 'username'
password = 'password'
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'username' : username, 'j_password' : password})
opener.open('http://friends.cisv.org/index.cfm', login_data)
resp = opener.open('http://friends.cisv.org/index.cfm?fuseaction=activities.list')
print resp.read()
but that gave me the following output:
<SCRIPT LANGUAGE="JavaScript">
alert('Sorry. You need to log back in to continue. You will be returned to the home page when you click on OK.');
document.location.href='index.cfm';
</SCRIPT>
What am I doing wrong?
I would recommend using the wonderful requests module.
The code below will get you logged into the site and persist the cookies for the duration of the session.
import requests

EMAIL = ''
PASSWORD = ''
URL = 'http://friends.cisv.org'

def main():
    # Start a session so we can have persistent cookies
    session = requests.Session()

    # This is the form data that the page sends when logging in
    login_data = {
        'loginemail': EMAIL,
        'loginpswd': PASSWORD,
        'submit': 'login',
    }

    # Authenticate
    r = session.post(URL, data=login_data)

    # Try accessing a page that requires you to be logged in
    r = session.get('http://friends.cisv.org/index.cfm?fuseaction=user.fullprofile')

if __name__ == '__main__':
    main()
The term "login" is unfortunately very vague. The code given here obviously tried to log in using HTTP basic authentication. I'd wager a guess that this site wants you to send it a username and password in some kind of POST form (that's how most web-based login forms work). In this case, you'd need to send the proper POST request, and keep whatever cookies it sent back to you for future requests. Unfortunately I don't know what this would be, it depends on the site. You'll need to figure out how it normally logs a user in and try to follow that pattern.
