I have created a script that takes two text files, "username.txt" and "password.txt", and tries to log in with each set of credentials. I have user1#mymail.com through user10#mymail.com in username.txt and password1 through password10 in password.txt. If a login succeeds, the server should return HTTP status code 200; if it fails, it should return 400. My script only processes the first line and doesn't run the rest. How can I fix this issue? Here is my code.
import urllib, urllib2

user = open('users.txt', 'r')
password = open('password.txt', 'r')
pa = ''.join(password)

for users in user:
    login_data = pa + users
    base_url = 'http://mymail.com'
    # login action we want to post data to
    response = urllib2.urlopen(base_url)
    login_action = '/auth/login'
    login_action = base_url + login_action
    response = urllib2.urlopen(login_action, login_data)
    response.read()
    print response.headers
    print response.getcode()
Here is my output when I run the script. Note that I have used credentials that are supposed to fail, yet I am still getting a code 200.
Date: Mon, 29 Jul 2013 14:54:59 GMT
Server: Apache
X-Powered-By: PHP/5.3.3
Set-Cookie: PHPSESSID=o3jlu86jgs7uj24fod107aps26; path=/
Cache-Control: no-cache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
200
All I had to do was make the inner loop go back to the beginning of the password file on each pass. Using the .seek(0) functionality, it reloads all the passwords and tries them for every user.
import urllib, urllib2

user = open('users.txt', 'r')
password = open('password.txt', 'r')

for users in user:
    password.seek(0)
    for pass_list in password:
        login_data = users + '\n' + pass_list
        print login_data
        base_url = 'http://my-site.com'
        # login action we want to post data to
        response = urllib2.urlopen(base_url)
        login_action = '/auth/login'
        login_action = base_url + login_action
        response = urllib2.urlopen(login_action, login_data)
        response.read()
        print response.headers
        print response.getcode()
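For comparison, the same nested-loop idea can be sketched with the requests package. The endpoint and the form-field names ('username', 'password') are assumptions here and would need to match the real login form; the `post` callable is injectable so the loop can be exercised without a network.

```python
import requests

# Hypothetical login endpoint -- replace with the real one.
LOGIN_URL = 'http://my-site.com/auth/login'

def try_logins(usernames, passwords, post=requests.post):
    """Try every username/password pair and record the HTTP status code.

    `post` defaults to requests.post but can be swapped out for testing.
    """
    results = {}
    for user in usernames:
        for pwd in passwords:
            # Field names are assumptions; inspect the site's login form.
            resp = post(LOGIN_URL, data={'username': user, 'password': pwd})
            results[(user.strip(), pwd.strip())] = resp.status_code
    return results
```

Reading the two files up front with `open(...).read().splitlines()` and passing the resulting lists avoids the seek(0) dance entirely, since lists can be iterated any number of times.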
I am on the hook to write a Python script that interacts with a remote web server over HTTP. Here is the server (name: username; password: passw0rd); basically, I need to upload an image to the remote server and print out its analysis output.
I have almost zero knowledge of Python network programming and really have no idea how this can be worked out. Could anyone shed some light on where I should start with such a script? I can capture the following HTTP POST request in Chrome, but have no idea how to proceed further:
POST /post HTTP/1.1
Host: 34.65.71.65
Connection: keep-alive
Content-Length: 3185
Cache-Control: max-age=0
Authorization: Basic dXNlcm5hbWU6cGFzc3cwcmQ=
Origin: http://34.65.71.65
Upgrade-Insecure-Requests: 1
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryUPXn3eOKoasOQMwW
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
Referer: http://34.65.71.65/post
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7
This is the Python script I am writing right now:
import requests

# define the API endpoint
API_ENDPOINT = "http://34.65.71.65/post"

# credentials
username = "username"
pwd = "passw0rd"

path = "./kite.png"
image_path = path

# read the image into a byte array
image_data = open(image_path, "rb").read()

# data to be sent to the API
data = image_data

# send the POST request and save the response as a response object
r = requests.post(url=API_ENDPOINT, auth=(username, pwd), data=data)

# extract the response text
print("The response is: %s" % r.text)
but somehow it triggers the following issue:
requests.exceptions.ConnectionError: ('Connection aborted.', BrokenPipeError(32, 'Broken pipe'))
And here is another trial:
import requests

# define the API endpoint
API_ENDPOINT = "http://34.65.71.65/post"

# credentials
username = "username"
pwd = "passw0rd"

path = "./kite.png"
with open(path, 'rb') as file:
    body = {'foo': 'bar'}
    body_file = {'file_field': file}
    response = requests.post(API_ENDPOINT, auth=(username, pwd), data=body, files=body_file)
    print(response.content)  # prints the result
Making an HTTP request in Python is very easy thanks to the requests API. Uploading a file requires reading it first and then sending it in the body of a POST request.
The broken-pipe error often occurs when the server closes the connection before the client has sent all the data, which is frequently due to an inconsistency between the content size announced in the headers and the real content size. To resolve this, open the file in 'rb' mode (or 'r' if it is text) and pass it through the requests API's files keyword argument.
import requests

# open the file in binary mode and let requests build the multipart body
with open('path/to/your/file', 'rb') as file:
    body = {'foo': 'bar'}
    body_file = {'file_field': file}
    response = requests.post('http://your.url.example', data=body, files=body_file)
    print(response.content)  # prints the result
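If the server also cares about the uploaded filename or MIME type, the files mapping accepts a (filename, content, content_type) tuple instead of a bare file object; requests accepts raw bytes in place of an open file. A minimal sketch, where the field name 'file_field' and the PNG content type are assumptions:

```python
import os

def build_file_payload(path, field_name='file_field', content_type='image/png'):
    """Build a requests-style `files` mapping with an explicit filename
    and MIME type, reading the whole file into memory as bytes."""
    with open(path, 'rb') as fh:
        data = fh.read()
    return {field_name: (os.path.basename(path), data, content_type)}

# Usage (network call left out so the sketch stays self-contained):
# payload = build_file_payload('./kite.png')
# response = requests.post(API_ENDPOINT, auth=(username, pwd), files=payload)
```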
I recently stumbled across some documentation for the unofficial Pandora API.
I decided to try this with Python 3.
After heading to the Authentication page I saw that I first had to verify that the service was available in my country, so I did this:
import requests
import urllib
url = "http://internal-tuner.pandora.com/services/json/?method=test.checkLicensing"
res = requests.post(url)
print(res.status_code)
print(res.content)
It prints out:
200
b'{"stat":"ok","result":{"isAllowed":true}}'
Right. So I'm allowed to use the partner service.
Next I saw that I had to get a Partner Login.
So I got the info it said I needed from the Partners page.
Note that this is not my login info; it is partner info the documentation told me to choose from.
username = "android"
password = "AC7IBG09A3DTSYM4R41UJWL07VLN8JI7"
deviceModel = "android-generic"
Next, the documentation says to send a post request to one of the following links as the base url:
http://tuner.pandora.com/services/json/
https://tuner.pandora.com/services/json/
http://internal-tuner.pandora.com/services/json/
https://internal-tuner.pandora.com/services/json/
Now to encode the url parameters and put them after the base url.
It says I should take the above username, password, and deviceModel, the method I want to call (for partner login it says this is "auth.partnerLogin"), and the version (it says to pass the string "5"), and URL-encode them.
So I set up the url params in urlencoded format and fire off a POST request:
import requests
import urllib
url = "http://internal-tuner.pandora.com/services/json/?"
username = "android"
password = "AC7IBG09A3DTSYM4R41UJWL07VLN8JI7"
deviceModel = "android-generic"
data = {
    "method": "auth.partnerLogin",
    "username": username,
    "password": password,
    "deviceModel": deviceModel,
    "version": "5"
}
url += urllib.parse.urlencode(data)
res = requests.post(url)
print("url:", url)
print("response:", res)
print("content:", res.content)
But when I do it prints this out and tells me there was an error:
url: http://internal-tuner.pandora.com/services/json/?method=auth.partnerLogin&username=android&password=AC7IBG09A3DTSYM4R41UJWL07VLN8JI7&deviceModel=android-generic&version=5
response: <Response [200]>
content: b'{"stat":"fail","message":"An unexpected error occurred","code":9}'
Has anyone else used this Api before?
Why am I getting an error? Am I missing something here?
Apparently pithos uses this api, and it is loading music fine for me.
Can anybody point me in the right direction here please?
It looks like you're passing the data as URL parameters and using an incorrect URL.
Here is a proper curl request:
########## REQUEST ##########
curl -i --data '{ "username": "android", "password": "AC7IBG09A3DTSYM4R41UJWL07VLN8JI7", "deviceModel": "android-generic", "version": "5", "includeUrls": true }' -X POST 'https://tuner.pandora.com:443/services/json/?method=auth.partnerLogin' -H "Content-Type: application/json" -A 'pinobar'
########## OUTPUT ##########
HTTP/1.1 200 OK
Date: Thu, 04 Jan 2018 03:46:54 GMT
Server: Apache
Content-Type: text/plain; charset=utf-8
Content-Length: 741
Cache-Control: must-revalidate, max-age=0
Expires: -1
Vary: Accept-Encoding
{"stat":"ok","result":{"syncTime":"f6f071bb4b886bc3545fbd66701b8d38","deviceProperties":{"followOnAdRefreshInterval":3,"ooyala":{"streamingPercentage":0,"streamingWhitelist":[534051315],"videoAdBufferRetryCount":3,"videoAdLoadingTimeout":2,"videoAdPlayTimeout":8},"videoAdUniqueInterval":0,"videoAdStartInterval":180,"optionalFeatures":{"optionalFeature":[{"feature":"useAudioProxy2","enabled":"false","platformVersionRange":{"low":"4.0","high":"5.0.0"},"productVersionRange":{"low":"1.6","high":"*"}}]},"adRefreshInterval":3,"videoAdRefreshInterval":870},"partnerAuthToken":"VADEjNzUq9Ew9HUkIzUT489kVe9kjo0If3","partnerId":"42","stationSkipUnit":"hour","urls":{"autoComplete":"http://autocomplete.pandora.com/search"},"stationSkipLimit":6}}
I would suggest using a urllib2 sample.
Here is a working sample for our case:
import json
import urllib2
username = "android"
password = "AC7IBG09A3DTSYM4R41UJWL07VLN8JI7"
deviceModel = "android-generic"
url = "https://tuner.pandora.com:443/services/json/?method=auth.partnerLogin"
values = {
    "username": username,
    "password": password,
    "deviceModel": deviceModel,
    "version": "5"
}
data = json.dumps(values)
headers = {'content-type': 'application/json'}
req = urllib2.Request(url, data, headers)
response = urllib2.urlopen(req)
content = response.read()
print("data:", data)
print("url:", url)
print("response:", response)
print("content:", content)
Output:
('url:', 'https://tuner.pandora.com:443/services/json/?method=auth.partnerLogin')
('response:', <addinfourl at 4509594832 whose fp = <socket._fileobject object at 0x10c7c0bd0>>)
('content:', '{"stat":"ok","result":{"stationSkipLimit":6,"partnerId":"42","partnerAuthToken":"VAEIniGnwSV1exsWHgUcsQgV5HA63B1nFA","syncTime":"4663310634ae885f45f489b2ab918a66","deviceProperties":{"followOnAdRefreshInterval":3,"ooyala":{"streamingPercentage":0,"streamingWhitelist":[534051315],"videoAdBufferRetryCount":3,"videoAdLoadingTimeout":2,"videoAdPlayTimeout":8},"videoAdUniqueInterval":0,"videoAdStartInterval":180,"optionalFeatures":{"optionalFeature":[{"feature":"useAudioProxy2","enabled":"false","platformVersionRange":{"low":"4.0","high":"5.0.0"},"productVersionRange":{"low":"1.6","high":"*"}}]},"adRefreshInterval":3,"videoAdRefreshInterval":870},"stationSkipUnit":"hour"}}')
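Since the question used requests, the same call can be sketched there as well: requests' json= keyword serializes the dict and sets the Content-Type: application/json header automatically. This helper just builds the pieces (URL, headers, body) so the equivalence can be checked without a network; the credentials are the public partner values quoted above.

```python
import json

URL = 'https://tuner.pandora.com:443/services/json/?method=auth.partnerLogin'

def build_partner_login():
    """Return the (url, headers, body) triple for the partner-login POST --
    the same request that `requests.post(URL, json=values)` would send."""
    values = {
        'username': 'android',
        'password': 'AC7IBG09A3DTSYM4R41UJWL07VLN8JI7',
        'deviceModel': 'android-generic',
        'version': '5',
    }
    headers = {'Content-Type': 'application/json'}
    return URL, headers, json.dumps(values)

# With requests installed:
# import requests
# res = requests.post(URL, json=values)  # same body and headers as above
```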
This question was posted on StackApps, but the issue may be more a programming issue than an authentication issue, hence it may deserve a better place here.
I am working on a desktop inbox notifier for StackOverflow, using the API with Python.
The script I am working on first logs the user in on StackExchange and then requests authorisation for the application. Assuming the application has been authorised through the user's web-browser interaction, it should be able to make authenticated requests to the API, so it needs the access token specific to the user. This is done with the URL: https://stackexchange.com/oauth/dialog?client_id=54&scope=read_inbox&redirect_uri=https://stackexchange.com/oauth/login_success.
When requesting authorisation via the web-browser the redirect is taking place and an access code is returned after a #. However, when requesting this same URL with Python (urllib2), no hash or key is returned in the response.
Why is it my urllib2 request is handled differently from the same request made in Firefox or W3m? What should I do to programmatically simulate this request and retrieve the access_token?
Here is my script (it's experimental) and remember: it assumes the user has already authorised the application.
#!/usr/bin/env python

import urllib
import urllib2
import cookielib

from BeautifulSoup import BeautifulSoup
from getpass import getpass

# Define URLs
parameters = ['client_id=54',
              'scope=read_inbox',
              'redirect_uri=https://stackexchange.com/oauth/login_success'
              ]
oauth_url = 'https://stackexchange.com/oauth/dialog?' + '&'.join(parameters)
login_url = 'https://openid.stackexchange.com/account/login'
submit_url = 'https://openid.stackexchange.com/account/login/submit'
authentication_url = 'http://stackexchange.com/users/authenticate?openid_identifier='

# Set counter for requests:
counter = 0

# Build opener
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

def authenticate(username='', password=''):
    '''
    Authenticates to StackExchange using user-provided username and password
    '''
    # Build up headers
    user_agent = 'Mozilla/5.0 (Ubuntu; X11; Linux i686; rv:8.0) Gecko/20100101 Firefox/8.0'
    headers = {'User-Agent': user_agent}
    # Set data to None
    data = None

    # 1. Build up URL request with headers and data
    request = urllib2.Request(login_url, data, headers)
    response = opener.open(request)

    # Build up POST data for authentication
    html = response.read()
    fkey = BeautifulSoup(html).findAll(attrs={'name': 'fkey'})[0].get('value').encode()
    values = {'email': username,
              'password': password,
              'fkey': fkey}
    data = urllib.urlencode(values)

    # 2. Build up URL for authentication
    request = urllib2.Request(submit_url, data, headers)
    response = opener.open(request)

    # Check if logged in
    if response.url == 'https://openid.stackexchange.com/user':
        print ' Logged in! :) '
    else:
        print ' Login failed! :( '

    # Find user ID URL
    html = response.read()
    id_url = BeautifulSoup(html).findAll('code')[0].text.split('"')[-2].encode()

    # 3. Build up URL for OpenID authentication
    data = None
    url = authentication_url + urllib.quote_plus(id_url)
    request = urllib2.Request(url, data, headers)
    response = opener.open(request)

    # 4. Build up URL request with headers and data
    request = urllib2.Request(oauth_url, data, headers)
    response = opener.open(request)
    if '#' in response.url:
        print 'Access code provided in URL.'
    else:
        print 'No access code provided in URL.'

if __name__ == '__main__':
    username = raw_input('Enter your username: ')
    password = getpass('Enter your password: ')
    authenticate(username, password)
To respond to comments below:
Tamper data in Firefox requests the above URL (as oauth_url in the code) with the following headers:
Host=stackexchange.com
User-Agent=Mozilla/5.0 (Ubuntu; X11; Linux i686; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
Accept=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language=en-us,en;q=0.5
Accept-Encoding=gzip, deflate
Accept-Charset=ISO-8859-1,utf-8;q=0.7,*;q=0.7
Connection=keep-alive
Cookie=m=2; __qca=P0-556807911-1326066608353; __utma=27693923.1085914018.1326066609.1326066609.1326066609.1; __utmb=27693923.3.10.1326066609; __utmc=27693923; __utmz=27693923.1326066609.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); gauthed=1; ASP.NET_SessionId=nt25smfr2x1nwhr1ecmd4ok0; se-usr=t=z0FHKC6Am06B&s=pblSq0x3B0lC
In the urllib2 request the header provides the user-agent value only. The cookie is not passed explicitly, but the se-usr is available in the cookie jar at the time of the request.
The response headers will be first the redirect:
Status=Found - 302
Server=nginx/0.7.65
Date=Sun, 08 Jan 2012 23:51:12 GMT
Content-Type=text/html; charset=utf-8
Connection=keep-alive
Cache-Control=private
Location=https://stackexchange.com/oauth/login_success#access_token=OYn42gZ6r3WoEX677A3BoA))&expires=86400
Set-Cookie=se-usr=t=kkdavslJe0iq&s=pblSq0x3B0lC; expires=Sun, 08-Jul-2012 23:51:12 GMT; path=/; HttpOnly
Content-Length=218
Then the redirect takes place through another request, with the fresh se-usr value from that header.
I don't know how to catch the 302 in urllib2; it handles it by itself (which is great). It would be nice, however, to see whether the access token provided in the Location header is available.
There's nothing special in the last response header; both Firefox and urllib return something like:
Server: nginx/0.7.65
Date: Sun, 08 Jan 2012 23:48:16 GMT
Content-Type: text/html; charset=utf-8
Connection: close
Cache-Control: private
Content-Length: 5664
I hope I didn't provide confidential info, let me know if I did :D
The token does not appear because of the way urllib2 handles the redirect. I am not familiar with the details, so I won't elaborate here.
The solution is to catch the 302 before urllib2 handles the redirect. This can be done by subclassing urllib2.HTTPRedirectHandler to get the redirect with its hashtag and token. Here is a short example of subclassing the handler:
class MyHTTPRedirectHandler(urllib2.HTTPRedirectHandler):
    def http_error_302(self, req, fp, code, msg, headers):
        print "Going through 302:\n"
        print headers
        return urllib2.HTTPRedirectHandler.http_error_302(self, req, fp, code, msg, headers)
In the headers, the Location attribute provides the full redirect URL, i.e. including the hashtag and token:
Output extract:
...
Going through 302:
Server: nginx/0.7.65
Date: Mon, 09 Jan 2012 20:20:11 GMT
Content-Type: text/html; charset=utf-8
Connection: close
Cache-Control: private
Location: https://stackexchange.com/oauth/login_success#access_token=K4zKd*HkKw5Opx(a8t12FA))&expires=86400
Content-Length: 218
...
More on catching redirects with urllib2 on StackOverflow (of course).
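To round this out, here is a sketch of wiring such a subclass into an opener so that the Location of every 302 is recorded before the redirect is followed. The answer's code is Python 2 (urllib2); the same classes live in urllib.request under Python 3, which is what this sketch assumes:

```python
import urllib.request

class RecordingRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Remember the Location header of every 302 before following it."""
    def __init__(self):
        self.locations = []

    def http_error_302(self, req, fp, code, msg, headers):
        # Record the full redirect target, then let the base class follow it.
        self.locations.append(headers.get('Location'))
        return urllib.request.HTTPRedirectHandler.http_error_302(
            self, req, fp, code, msg, headers)

handler = RecordingRedirectHandler()
opener = urllib.request.build_opener(handler)
# After opener.open(oauth_url), handler.locations holds the full redirect
# URLs, hashtag and access_token included.
```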
I'm making a request using urllib2 and the HTTPBasicAuthHandler like so:
import urllib2
theurl = 'http://someurl.com'
username = 'username'
password = 'password'
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, theurl, username, password)
authhandler = urllib2.HTTPBasicAuthHandler(passman)
opener = urllib2.build_opener(authhandler)
urllib2.install_opener(opener)
params = "foo=bar"
response = urllib2.urlopen('http://someurl.com/somescript.cgi', params)
print response.info()
I'm currently getting a httplib.BadStatusLine exception when running this code. How could I go about debugging? Is there a way to see what the raw response is regardless of the unrecognized HTTP status code?
Have you tried setting the debug level in your own HTTP handler? Change your code to something like this:
>>> import urllib2
>>> handler=urllib2.HTTPHandler(debuglevel=1)
>>> opener = urllib2.build_opener(handler)
>>> urllib2.install_opener(opener)
>>> resp=urllib2.urlopen('http://www.google.com').read()
send: 'GET / HTTP/1.1
Accept-Encoding: identity
Host: www.google.com
Connection: close
User-Agent: Python-urllib/2.7'
reply: 'HTTP/1.1 200 OK'
header: Date: Sat, 08 Oct 2011 17:25:52 GMT
header: Expires: -1
header: Cache-Control: private, max-age=0
header: Content-Type: text/html; charset=ISO-8859-1
... the remainder of the send / reply other than the data itself
So the three lines to prepend are:
handler=urllib2.HTTPHandler(debuglevel=1)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
... the rest of your urllib2 code...
That will show the raw HTTP send / reply cycle on stderr.
Edit from comment
Does this work?
... same code as above this line
opener=urllib2.build_opener(authhandler, urllib2.HTTPHandler(debuglevel=1))
... rest of your code
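Under Python 3, the same trick is available one layer down in http.client, which urllib (and requests, via urllib3) sits on top of. A sketch of the class-wide switch, which also makes requests calls verbose:

```python
import http.client

# http.client echoes the raw send/reply cycle when debuglevel is non-zero.
# urllib.request's HTTPHandler(debuglevel=1) sets this per connection; the
# class-wide assignment below affects every connection, including requests'.
http.client.HTTPConnection.debuglevel = 1

# ... make HTTP requests here; the wire traffic is printed as it happens ...

# Switch it back off when done.
http.client.HTTPConnection.debuglevel = 0
```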
I'm trying to make a POST request to retrieve information about a book.
Here is the code, which returns HTTP code 302, Moved:
import httplib, urllib

params = urllib.urlencode({
    'isbn': '9780131185838',
    'catalogId': '10001',
    'schoolStoreId': '15828',
    'search': 'Search'
})
headers = {"Content-type": "application/x-www-form-urlencoded",
           "Accept": "text/plain"}
conn = httplib.HTTPConnection("bkstr.com:80")
conn.request("POST", "/webapp/wcs/stores/servlet/BuybackSearch",
             params, headers)
response = conn.getresponse()
print response.status, response.reason
data = response.read()
conn.close()
When I try from a browser, from this page: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828 , it works. What am I missing in my code?
EDIT:
Here's what I get when I call print response.msg
302 Moved Date: Tue, 07 Sep 2010 16:54:29 GMT
Vary: Host,Accept-Encoding,User-Agent
Location: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
X-UA-Compatible: IE=EmulateIE7
Content-Length: 0
Content-Type: text/plain; charset=utf-8
It seems that the Location points to the same URL I'm trying to access in the first place?
EDIT2:
I've tried using urllib2 as suggested here. Here is the code:
import urllib, urllib2
url = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
values = {'isbn': '9780131185838',
          'catalogId': '10001',
          'schoolStoreId': '15828',
          'search': 'Search'}
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
print response.geturl()
print response.info()
the_page = response.read()
print the_page
And here is the output:
http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
Date: Tue, 07 Sep 2010 16:58:35 GMT
Pragma: No-cache
Cache-Control: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: JSESSIONID=0001REjqgX2axkzlR6SvIJlgJkt:1311s25dm; Path=/
Vary: Accept-Encoding,User-Agent
X-UA-Compatible: IE=EmulateIE7
Content-Length: 0
Connection: close
Content-Type: text/html; charset=utf-8
Content-Language: en-US
Set-Cookie: TSde3575=225ec58bcb0fdddfad7332c2816f1f152224db2f71e1b0474c866f3b; Path=/
Their server seems to want you to acquire the proper cookie. This works:
import urllib, urllib2, cookielib
cookie_jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_jar))
urllib2.install_opener(opener)
# acquire cookie
url_1 = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828'
req = urllib2.Request(url_1)
rsp = urllib2.urlopen(req)
# do POST
url_2 = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
values = dict(isbn='9780131185838', schoolStoreId='15828', catalogId='10001')
data = urllib.urlencode(values)
req = urllib2.Request(url_2, data)
rsp = urllib2.urlopen(req)
content = rsp.read()
# print result
import re
pat = re.compile('Title:.*')
print pat.search(content).group()
# OUTPUT: Title: Statics & Strength of Materials for Arch (w/CD)<br />
You might want to use the urllib2 module, which should handle redirects better. Here's an example of POSTing with urllib2.
Perhaps that's what the browser gets, and you'll just have to follow the 302 redirect.
If all else fails, you can monitor the dialogue between Firefox and the web server using Firebug, tcpdump, or Wireshark, and see which HTTP headers are different. Possibly it's just the User-Agent header.
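A sketch of making that comparison programmatically: with requests, every intermediate redirect response is kept in response.history, so the 302's Location header can be listed without a packet sniffer. The helper below works on any response-like object, which also lets it be shown without a live server:

```python
def redirect_chain(response):
    """Return the Location header of each intermediate redirect response.

    Works on any object with a `history` list of responses carrying a
    `headers` mapping -- e.g. the result of requests.get/post with
    allow_redirects=True (the default).
    """
    return [r.headers.get('Location') for r in response.history]

# With requests installed, against the question's URL:
# import requests
# r = requests.post('http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch',
#                   data={'isbn': '9780131185838'})
# print(redirect_chain(r))
```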