POST method not working with Python Requests - python

I have a PTZ camera and I'm trying different ways to access it via cURL. My goal is to reach the camera's preset positions through its web interface.
The logic for accessing a preset on the PTZ camera, based on the browser debugger, is:
Login using the POST method
Select a preset position using the POST method
Submit using the PUT method
Here is the shell script:
echo "Set PTZ"
echo $1 #IP address
echo $2 #preset
url_login='http://'$1'/login/login/'
url_preset='http://'$1'/ptz/presets.html'
curl -c cookies.txt -s -X POST $url_login --data "user=admin&pass=admin&forceLogin=on"
curl -b cookies.txt -s -X POST $url_preset --data 'PTZInterface!!ptzPositions='$2
curl -b cookies.txt -s -X PUT $url_preset --data 'autobutton=GotoCurrVirtualPreset&object=PTZInterface&id='
Using the shell script I succeed: the camera is accessed and goes to the preset.
But my main goal is to do the same from a Python program. Here is my Python code using requests:
import requests
URL_LOGIN = "/login/login/"
PARAMS_LOGIN = {"user": "admin", "pass": "admin", "forceLogin": "on"}
URL_PRESET = "/ptz/presets.html"
HEADERS = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0',
           'Accept': '*/*', 'Accept-Language': 'en-US,en;q=0.5',
           'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
           'X-Requested-With': 'XMLHttpRequest',
           'Connection': 'keep-alive', 'Pragma': 'no-cache', 'Cache-Control': 'no-cache'}

def set_ptz(arg_camera = None, arg_preset = None):
    url_login = "http://" + arg_camera + URL_LOGIN
    url_preset = "http://" + arg_camera + URL_PRESET
    HEADERS['Host'] = arg_camera
    HEADERS['Referer'] = 'http://' + arg_camera + '/'
    params = {}
    params["PTZInterface!!ptzPositions"] = arg_preset
    params_put = {}
    params_put["autobutton"] = "GotoCurrVirtualPreset"
    params_put["object"] = "PTZInterface"
    params_put["id"] = ""
    s = requests.Session()
    r1 = s.post(url_login, data = PARAMS_LOGIN) # Login -> success
    var_cookies = r1.cookies
    r2 = s.post(url_preset, cookies = var_cookies, headers = HEADERS, data = params) # Post preset position -> failed
    r3 = s.put(url_preset, cookies = var_cookies, headers = HEADERS, data = params_put) # Put execution -> success
    print r1.headers
    print var_cookies
    print r2.headers
    print r3.headers
    print r3.text
    print r1.status_code
    print r2.status_code
    print r3.status_code

set_ptz('10.26.1.3.61', 1)
I succeed in logging in and in submitting with PUT, but the POST of the preset position fails. What's wrong in my Python code? I expected the result to be the same as with cURL.
Thank you for your help.

requests is escaping the exclamation points in the POST data:
In [1]: import requests
In [2]: requests.post(..., data={"PTZInterface!!ptzPositions": '1'}).request.body
Out[2]: 'PTZInterface%21%21ptzPositions=1'
cURL just sends them as-is. You can pass data directly as a string:
In [3]: requests.post(..., data="PTZInterface!!ptzPositions=1").request.body
Out[3]: 'PTZInterface!!ptzPositions=1'
Or use urllib.parse.urlencode's safe parameter to build it:
In [13]: urllib.parse.urlencode({'PTZInterface!!ptzPositions': 1}, safe='!')
Out[13]: 'PTZInterface!!ptzPositions=1'
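Applied to the set_ptz function above, a minimal sketch of the fix (a hypothetical adaptation; only the r2 line changes) is to build the body yourself and pass it as a string:
# Sketch: pre-encode the form body so the '!' characters are sent unescaped
# (urlencode's safe parameter is Python 3; on Python 2 build the string manually).
from urllib.parse import urlencode

body = urlencode({"PTZInterface!!ptzPositions": arg_preset}, safe='!')
r2 = s.post(url_preset, cookies=var_cookies, headers=HEADERS, data=body)
Because HEADERS already sets Content-Type to application/x-www-form-urlencoded, passing the pre-built string changes nothing else about the request.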

Related

POST request (python) - invalid request

I'm trying to use the API of a media ID registry, the EIDR, to download TV show information. I'd like to be able to query many shows automatically. I'm not experienced with APIs, and the documentation for this specific one is very opaque. I'm using Python 3 (requests library) on Ubuntu 16.04.
I tried sending a request for a specific TV show. I took the header and parameter information from the browser: I ran the query in the browser (I looked up 'anderson cooper 360' from this page) and then looked at the "network" tab of the browser's page inspector. I used the following code:
import requests

url = 'https://resolve.eidr.org/EIDR/query/'
headers = {'Host': 'ui.eidr.org',
           'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0',
           'Accept': '*/*',
           'Accept-Language': 'en-US,en;q=0.5', 'Accept-Encoding': 'gzip, deflate, br',
           'Referer': 'https://ui.eidr.org/search/results/19c70c63d73790b86f3fb385f2a9b3f4',
           'Cookie': 'ci_session=f4tnbi8qm7oaq30agjtn8p69j91s4li4; '
                     '_ga=GA1.2.1738620664.1519337357; _gid=GA1.2.1368695940.1519337357; _gat=1',
           'Connection': 'keep-alive'}
params = {'search_page_size': 25, 'CustomAsciiSearch[_v]': 1,
          'search_type': 'content', 'ResourceName[_v]': 'anderson cooper 360',
          'AlternateResourceNameAddition[_v]': 1,
          'AssociatedOrgAlternateNameAddition[_v]': 1, 'Status[_v]': 'valid'}
r = requests.post(url, data=params, headers=headers)
print(r.text)
I get this response that basically says it's an invalid request:
<?xml version="1.0" encoding="UTF-8"?><Response xmlns="http://www.eidr.org/schema" version="2.1.0"><Status><Code>3</Code><Type>invalid request</Type></Status></Response>
Now, I read in an answer to this Stack Overflow question that I should somehow use a session object. The code suggested in Padraic Cunningham's answer was this:
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:46.0) Gecko/20100101 Firefox/46.0',
           'X-Requested-With': 'XMLHttpRequest',
           "referer": "https://www.hackerearth.com/challenges/"}
with requests.Session() as s:
    s.get("https://www.hackerearth.com")
    headers["X-CSRFToken"] = s.cookies["csrftoken"]
    r = s.post("https://www.hackerearth.com/AJAX/filter-challenges/?modern=true",
               headers=headers, files={'submit': (None, 'True')})
    print(r.json())
So I understand that I should somehow use this, but I don't fully understand why or how.
So my question(s) would be:
1) What does 'invalid request' mean in this case?
2) Do you have any suggestions for how to write the request in a way that I can iterate it many times for different items I want to look up?
3) Do you know what I should do to properly use a session object here?
Thank you!
You probably need this documentation.
1) From the documentation:
invalid request: An API (URI) that does not exist including missing a required
parameter. May also include an incorrect HTTP operation on a valid
URI (such as a GET on a registration). Could also be POST multipart
data that is syntactically invalid such as missing required headers or
if the end-of-line characters are not CR-LF.
2) As far as I understand, this API accepts XML requests. See what appears after clicking 'View XML' on the results page (https://ui.eidr.org/search/results). For 'anderson cooper 360' you can send the XML query from Python like this:
import requests
import xml.etree.ElementTree as ET
url = 'https://resolve.eidr.org/EIDR/query/'
headers = {'Content-Type': 'text/xml',
'Authorization': 'Eidr 10.5238/webui:10.5237/D4C9-7E59:9kDMO4+lpsZGUIl8doWMdw==',
'EIDR-Version': '2.1'}
xml_query = """<Request xmlns="http://www.eidr.org/schema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Operation>
<Query>
<Expression><![CDATA[ASCII(((/FullMetadata/BaseObjectData/ResourceName "anderson" AND /FullMetadata/BaseObjectData/ResourceName "cooper" AND /FullMetadata/BaseObjectData/ResourceName "360") OR (/FullMetadata/BaseObjectData/AlternateResourceName "anderson" AND /FullMetadata/BaseObjectData/AlternateResourceName "cooper" AND /FullMetadata/BaseObjectData/AlternateResourceName "360")) AND /FullMetadata/BaseObjectData/Status "valid")]]></Expression>
<PageNumber>1</PageNumber>
<PageSize>25</PageSize>
</Query>
</Operation>
</Request>"""
r = requests.post(url, data=xml_query, headers=headers)
root = ET.fromstring(r.text)
for sm in root.findall('.//{http://www.eidr.org/schema}SimpleMetadata'):
    print({ch.tag.replace('{http://www.eidr.org/schema}', ''): ch.text for ch in sm.getchildren()})
3) I don't think you need the session object.
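To address question 2 (querying many shows), the request above can be wrapped in a small helper that builds the expression from an arbitrary title. A sketch, reusing the url and headers defined above; the word-by-word quoting of the title is a naive assumption about how the query language should be built:
def query_title(title, page_size=25):
    ns = '{http://www.eidr.org/schema}'
    # One "ResourceName <word>" clause per word, mirroring the CDATA expression above.
    res = ' AND '.join('/FullMetadata/BaseObjectData/ResourceName "%s"' % w for w in title.split())
    alt = ' AND '.join('/FullMetadata/BaseObjectData/AlternateResourceName "%s"' % w for w in title.split())
    expr = 'ASCII(((%s) OR (%s)) AND /FullMetadata/BaseObjectData/Status "valid")' % (res, alt)
    xml_query = ('<Request xmlns="http://www.eidr.org/schema"><Operation><Query>'
                 '<Expression><![CDATA[%s]]></Expression>'
                 '<PageNumber>1</PageNumber><PageSize>%d</PageSize>'
                 '</Query></Operation></Request>' % (expr, page_size))
    r = requests.post(url, data=xml_query, headers=headers)
    root = ET.fromstring(r.text)
    return [{ch.tag.replace(ns, ''): ch.text for ch in sm}
            for sm in root.findall('.//' + ns + 'SimpleMetadata')]

for show in query_title('anderson cooper 360'):
    print(show)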

Web crawler - Python Requests POST not returning data [duplicate]

This question already has an answer here:
How to programmatically send POST request to JSF page without using HTML form?
(1 answer)
Closed 5 years ago.
I'm working on my first web crawler. I'm trying to get data about telephone numbers in Mexico, and the website that provides the data is: site; it works with XHR requests.
I have this code so far:
from requests import Request, Session
import xml.etree.ElementTree as ET
import requests
import lxml.etree as etree
url = 'https://sns.ift.org.mx:8081/sns-frontend/consulta-numeracion/numeracion-geografica.xhtml'
s = Session()
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.80 Safari/537.36',
'Content-Type': 'text/html; charset=UTF-8',
}
str1 = s.post(url, headers=headers) #Loading the page
xhtml=str1.text.encode('utf-8')
#Savig the first response, to get the ViewState
text_file = open("loaded.txt", "w")
text_file.write(xhtml)
text_file.close()
x = ET.fromstring(xhtml)
namespace = "{http://www.w3.org/1999/xhtml}"
path = './/*[@id="javax.faces.ViewState"]'
e = x.findall(path.format(namespace))
for i in e:
    VS = i.attrib['value'] #ViewState
print VS #ViewState
At this point I have the ViewState of the page. Now I send a new POST with the data, the number I want to consult, and the ViewState.
data = {
    "javax.faces.partial.ajax": "true",
    "javax.faces.source": "FORM_myform:BTN_publicSearch",
    "javax.faces.partial.execute": "@all",
    "javax.faces.partial.render": "FORM_myform:P_containerConsulta+FORM_myform:P_containerpoblaciones+FORM_myform:P_containernumeracion+FORM_myform:P_containerinfo+FORM_myform:P_containerLocal+FORM_myform:P_containerDesplegable",
    "FORM_myform:BTN_publicSearch": "FORM_myform:BTN_publicSearch",
    "FORM_myform": "FORM_myform",
    "FORM_myform:TXT_NationalNumber": "6564384757",
    "javax.faces.ViewState=": VS #ViewState
}
req = s.post(url, data=data, headers=headers)
#Saving the new response, this is supposed to bring the results
text_file = open("Output.txt", "w")
text_file.write(req.text.encode('utf-8'))
text_file.close()
The thing is that the response I get is the full page source without the information, and I noticed that it comes with a new ViewState; I believe that's why it isn't returning the data.
Also, I don't want to use Selenium, because I don't have a graphical interface on the server and I need to consult a lot of numbers daily.
...UPDATE...
I believe the problem lies in JSF; I need to know how to handle the data and the JSF values.
In order to use requests to get the data off a website, you need something like this:
r = requests.get(url)
Then I would print the text that the 'r' variable holds, like so:
print(r.text)
And then I would loop over that text, treating it like an array of characters (text[0], text[1], ...), and check all of it for anything that may look like a phone number.
This is just one of the ways you can do what you are trying to do with your web crawler, and it doesn't use XML at all.
So in all, my code would look like this:
import requests

url = "myurl"
r = requests.get(url)
text = r.text
counter = 0
length = len(text)
while counter != length:
    if text[counter] in '1234567890':
        data = text[counter:counter+12]
        print(data)
    counter += 1
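A more compact version of the same idea, using a regular expression over r.text instead of a character-by-character scan (a sketch; the 10-digit pattern is an assumption about the numbers you are after):
import re
import requests

url = "myurl"
r = requests.get(url)
# Print every run of 10 consecutive digits found in the page text.
for number in re.findall(r'\d{10}', r.text):
    print(number)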
You could try with cURL, something like this:
#!/bin/bash
CURL='/usr/bin/curl --connect-timeout 5 --max-time 50'
URL='https://sns.ift.org.mx:8081/sns-frontend/consulta-numeracion/numeracion-geografica.xhtml'
CURLARGS='-sD - -j'
NUM='6564193195'
c_FRONTAPPID="$($CURL $CURLARGS $URL)"
arr=($c_FRONTAPPID)
i=0
for var in "${arr[@]}"
do
    if [[ $var == *"FRONTAPPID="* ]]; then
        FRONTAPPID=$(echo "$var" | sed 's/.*FRONTAPPID=\(.*\);.*/\1/' | sed 's/!/"'"'"'!'"'"'"/g')
        #echo $var
        #echo $FRONTAPPID
    fi
    if [[ $var == *"id=\"javax.faces.ViewState\""* ]]; then
        VIEWSTATE=$(echo ${arr[i+1]} | sed 's/.*"\(.*\)".*/\1/')
        #echo ${arr[i+1]}
        #echo $VIEWSTATE
    fi
    ((i++))
done
($CURL 'https://sns.ift.org.mx:8081/sns-frontend/consulta-numeracion/numeracion-geografica.xhtml' -X POST -H 'Host: sns.ift.org.mx:8081' -H 'Accept: application/xml, text/xml, */*; q=0.01' -H 'Accept-Language: en-US,en;q=0.5' -H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:56.0) Gecko/20100101 Firefox/56.0' --compressed -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'Faces-Request: partial/ajax' -H 'X-Requested-With: XMLHttpRequest' -H 'Referer: https://sns.ift.org.mx:8081/sns-frontend/consulta-numeracion/numeracion-geografica.xhtml' -H "Cookie: FRONTAPPID=$FRONTAPPID" -H 'Connection: keep-alive' --data "javax.faces.partial.ajax=true&javax.faces.source=FORM_myform:BTN_publicSearch&javax.faces.partial.execute=@all&javax.faces.partial.render=FORM_myform:P_containerConsulta+FORM_myform:P_containerpoblaciones+FORM_myform:P_containernumeracion+FORM_myform:P_containerinfo+FORM_myform:P_containerLocal+FORM_myform:P_containerDesplegable&FORM_myform:BTN_publicSearch=FORM_myform:BTN_publicSearch&FORM_myform=FORM_myform&FORM_myform:TXT_NationalNumber=$NUM&javax.faces.ViewState=$VIEWSTATE" )
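If you would rather do the whole flow in Python, the same two requests can be sketched with a requests.Session (field names copied from the curl command above; the regex used to pull the ViewState out of the page is an assumption about the markup):
import re
import requests

URL = 'https://sns.ift.org.mx:8081/sns-frontend/consulta-numeracion/numeracion-geografica.xhtml'
NUM = '6564193195'

s = requests.Session()   # the session keeps the FRONTAPPID cookie for us
page = s.get(URL)
# Pull the ViewState out of the initial page (assumes a value="..." attribute follows the id).
viewstate = re.search(r'id="javax\.faces\.ViewState"[^>]*value="([^"]+)"', page.text).group(1)

data = {
    'javax.faces.partial.ajax': 'true',
    'javax.faces.source': 'FORM_myform:BTN_publicSearch',
    'javax.faces.partial.execute': '@all',
    # Spaces here are URL-encoded to '+' by requests, matching the curl --data string.
    'javax.faces.partial.render': 'FORM_myform:P_containerConsulta FORM_myform:P_containerpoblaciones'
                                  ' FORM_myform:P_containernumeracion FORM_myform:P_containerinfo'
                                  ' FORM_myform:P_containerLocal FORM_myform:P_containerDesplegable',
    'FORM_myform:BTN_publicSearch': 'FORM_myform:BTN_publicSearch',
    'FORM_myform': 'FORM_myform',
    'FORM_myform:TXT_NationalNumber': NUM,
    'javax.faces.ViewState': viewstate,
}
headers = {'Faces-Request': 'partial/ajax', 'X-Requested-With': 'XMLHttpRequest'}
print(s.post(URL, data=data, headers=headers).text)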

Cookies and http requests

I have this URL whose content is produced in this way (PHP; it's supposed to generate a new cookie on every request):
setcookie('token', md5(time()), time()+99999);
if (isset($_COOKIE['token'])) {
    echo 'Cookie: ' . $_COOKIE['token'];
    die();
}
echo 'Cookie not set yet';
As you can see, the cookie changes on every reload/refresh of the page. Now I have a Python (Python 3) script with three requests that are completely independent from each other:
import requests

def get_req_data(req):
    print('\n\ntoken: ', req.cookies['token'])
    print('headers we sent: ', req.request.headers)
    print('headers server sent back: ', req.headers)

url = 'http://migueldvl.com/heya/login/tests2.php'
headers = {
    "User-agent": 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:7.0.1) Gecko/20100101 Firefox/7.0.1',
    "Referer": 'https://www.google.com'
}

req1 = requests.get(url, headers=headers)
get_req_data(req1)
req2 = requests.get(url, headers=headers)
get_req_data(req2)
req3 = requests.get(url, headers=headers)
get_req_data(req3)
How can it be that we sometimes get the same cookie in different requests, when it's clearly programmed to change on every request?
If we:
import time
and add a
time.sleep(1) # wait one second before the next request
between requests, the cookie changes every time. This is the right and expected behaviour, but my question is: why do we need the time.sleep(1) to be certain of getting a different cookie? Wouldn't separate requests be enough on their own?
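For what it's worth, the PHP snippet above can only produce a new token once per second: time() returns whole seconds, so md5(time()) is identical for every request that lands within the same second, and the sleep simply pushes the next request into the next second. A small illustration of the same arithmetic in Python (mirroring md5(time()), not calling the site):
import hashlib
import time

def token():
    # Same recipe as the PHP: md5 of the Unix timestamp in whole seconds.
    return hashlib.md5(str(int(time.time())).encode()).hexdigest()

a, b = token(), token()
print(a == b)        # usually True: both calls fall within the same second
time.sleep(1)
print(a == token())  # False: the clock has moved on, so the token changes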

Using python to scrape ASP.NET site with id in url

I'm trying to scrape the search results of this ASP.NET website by sending a POST request with Python requests. Even though I first use a GET request to fetch the __RequestVerificationToken and include it in my header, I just get this reply:
{"Token":"Y2VgsmEAAwA","Link":"/search/Y2VgsmEAAwA/"}
which is not the valid link: it's the total search results, ignoring the arrival date and area included in my POST request. What am I missing? How do I scrape a site like this, which generates a (session?) ID for the URL?
Thank you so much in advance to all of you!
My python script:
import json
import requests
from bs4 import BeautifulSoup
r = requests.Session()
# GET request
gr = r.get("http://www.feline.dk")
bsObj = BeautifulSoup(gr.text,"html.parser")
auth_string = bsObj.find("input", {"name": "__RequestVerificationToken"})['value']
#print(auth_string)
#print(gr.url)
# POST request
search_request = {
"Geography.Geography":"Danmark",
"Geography.GeographyLong=":"Danmark (Ferieområde)",
"Geography.Id":"da509992-0830-44bd-869d-0270ba74ff62",
"Geography.SuggestionId": "",
"Period.Arrival":"16-1-2016",
"Period.Duration":7,
"Period.ArrivalCorrection":"false",
"Price.MinPrice":None,
"Price.MaxPrice":None,
"Price.MinDiscountPercentage":None,
"Accommodation.MinPersonNumber":None,
"Accommodation.MinBedrooms":None,
"Accommodation.NumberOfPets":None,
"Accommodation.MaxDistanceWater":None,
"Accommodation.MaxDistanceShopping":None,
"Facilities.SwimmingPool":"false",
"Facilities.Whirlpool":"false",
"Facilities.Sauna":"false",
"Facilities.InternetAccess":"false",
"Facilities.SatelliteCableTV":"false",
"Facilities.FireplaceStove":"false",
"Facilities.Dishwasher":"false",
"Facilities.WashingMachine":"false",
"Facilities.TumblerDryer":"false",
"update":"true"
}
payload = {
"searchRequestJson": json.dumps(search_request),
}
header ={
"Accept":"application/json, text/html, */*; q=0.01",
"Accept-Encoding":"gzip, deflate",
"Accept-Language":"da-DK,da;q=0.8,en-US;q=0.6,en;q=0.4",
"Connection":"keep-alive",
"Content-Length":"720",
"Content-Type":"application/x-www-form-urlencoded; charset=UTF-8",
"Cookie":"ASP.NET_SessionId=ebkmy3bzorzm2145iwj3bxnq; __RequestVerificationToken=" + auth_string + "; aid=382a95aab250435192664e80f4d44e0f; cid=google-dk; popout=hidden; __utmt=1; __utma=1.637664197.1451565630.1451638089.1451643956.3; __utmb=1.7.10.1451643956; __utmc=1; __utmz=1.1451565630.1.1.utmgclid=CMWOra2PhsoCFQkMcwod4KALDQ|utmccn=(not%20set)|utmcmd=(not%20set)|utmctr=(not%20provided); BNI_Feline.Web.FelineHolidays=0000000000000000000000009b84f30a00000000",
"Host":"www.feline.dk",
"Origin":"http://www.feline.dk",
#"Referer":"http://www.feline.dk/search/Y2WZNDPglgHHXpe2uUwFu0r-JzExMYi6yif5KNswMDBwMDAAAA/",
"User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36",
"X-Requested-With":"XMLHttpRequest"
}
gr = r.post(
url = 'http://www.feline.dk/search',
data = payload,
headers = header
)
#print(gr.url)
bsObj = BeautifulSoup(gr.text,"html.parser")
print(bsObj)
After multiple tries, I found that your search request is malformed (it needs to be URL-encoded, not JSON), and that the cookie information you set in the headers overrides what the session manages (just let the session do that work).
I simplified the code as follows and I get the desired result:
r = requests.Session()
# GET request
gr = r.get("http://www.feline.dk")
bsObj = BeautifulSoup(gr.text,"html.parser")
auth_string = bsObj.find("input", {"name": "__RequestVerificationToken"})['value']
# POST request
search_request = "Geography.Geography=Hou&Geography.GeographyLong=Hou%2C+Danmark+(Ferieomr%C3%A5de)&Geography.Id=847fcbc5-0795-4396-9318-01e638f3b0f6&Geography.SuggestionId=&Period.Arrival=&Period.Duration=7&Period.ArrivalCorrection=False&Price.MinPrice=&Price.MaxPrice=&Price.MinDiscountPercentage=&Accommodation.MinPersonNumber=&Accommodation.MinBedrooms=&Accommodation.NumberOfPets=&Accommodation.MaxDistanceWater=&Accommodation.MaxDistanceShopping=&Facilities.SwimmingPool=false&Facilities.Whirlpool=false&Facilities.Sauna=false&Facilities.InternetAccess=false&Facilities.SatelliteCableTV=false&Facilities.FireplaceStove=false&Facilities.Dishwasher=false&Facilities.WashingMachine=false&Facilities.TumblerDryer=false"
gr = r.post(
    url = 'http://www.feline.dk/search/',
    data = search_request,
    headers = {'Content-Type': 'application/x-www-form-urlencoded'}
)
print(gr.url)
Result:
http://www.feline.dk/search/Y2U5erq-ZSr7NOfJEozPLD5v-MZkw8DAwMHAAAA/
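For reference, the hand-built string above is just a URL-encoded form body, so you can also let requests build it from a plain dict (a sketch; only a few of the fields are shown, the rest follow the question's list, and whether the server tolerates the omitted fields is an assumption):
# requests URL-encodes a dict passed as data and sets the form Content-Type itself.
search_fields = {
    "Geography.Geography": "Hou",
    "Geography.GeographyLong": "Hou, Danmark (Ferieområde)",
    "Geography.Id": "847fcbc5-0795-4396-9318-01e638f3b0f6",
    "Period.Duration": "7",
    "Facilities.SwimmingPool": "false",
    # ... remaining Period/Price/Accommodation/Facilities fields as in the question ...
}
gr = r.post("http://www.feline.dk/search/", data=search_fields)
print(gr.url)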
Thank you Kantium for your answer. In my case, I found that the RequestVerificationToken was actually generated by a JS script inside the page.
1 - Call the first page that generates the code; in my case it returned something like this inside the HTML:
<script>
    Sys.Net.WebRequestManager.add_invokingRequest(function (sender, networkRequestEventArgs) {
        var request = networkRequestEventArgs.get_webRequest();
        var headers = request.get_headers();
        headers['RequestVerificationToken'] = '546bd932b91b4cdba97335574a263e47';
    });

    $.ajaxSetup({
        beforeSend: function (xhr) {
            xhr.setRequestHeader("RequestVerificationToken", '546bd932b91b4cdba97335574a263e47');
        },
        complete: function (result) {
            console.log(result);
        },
    });
</script>
2 - Grab the RequestVerificationToken code and then add it to your request along with the cookie from set-cookie.
let resp_setcookie = response.headers["set-cookie"];
let rege = new RegExp(/(?:RequestVerificationToken", ')(\S*)'/);
let token = rege.exec(response.body)[1];
I actually store them in global variables, and later in my Node.js request I add them to the request object:
headers.Cookie = gCookies.cookie;
headers.RequestVerificationToken = gCookies.token;
so that the final request carries both of them.
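The same extraction is straightforward in Python as well (a sketch; url and payload are placeholders for whatever page and form you are submitting, and the regex mirrors the JS one above):
import re
import requests

s = requests.Session()   # keeps the cookie from Set-Cookie for the next request
page = s.get(url)
# Capture the token set in the inline script: ...("RequestVerificationToken", '<token>')
token = re.search(r'RequestVerificationToken", \'([^\']*)\'', page.text).group(1)
result = s.post(url, data=payload, headers={"RequestVerificationToken": token})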
Remember that you can monitor requests sent using:
require("request-debug")(requestpromise);
Good luck !

Convert a curl POST request to Python only using standard library

I would like to convert this curl command to something that I can use in Python for an existing script.
curl -u 7898678:X -H 'Content-Type: application/json' \
-d '{"message":{"body":"TEXT"}}' http://sample.com/36576/speak.json
TEXT is what I would like to replace with a message generated by the rest of the script (which already works reasonably well, although I don't think it follows best practices or is particularly reliable; I need to find out how to properly learn to program, i.e. not assemble things from Google).
I would like this to work with the standard library if possible.
I would like this to work with the standard library if possible.
The standard library provides urllib and httplib for working with URLs:
>>> import httplib, urllib
>>> params = urllib.urlencode({'apple': 1, 'banana': 2, 'coconut': 'yummy'})
>>> headers = {"Content-type": "application/x-www-form-urlencoded",
... "Accept": "text/plain"}
>>> conn = httplib.HTTPConnection("example.com:80")
>>> conn.request("POST", "/some/path/to/site", params, headers)
>>> response = conn.getresponse()
>>> print response.status, response.reason
200 OK
If you want to execute curl itself, though, you can just invoke os.system():
import os

TEXT = ...
cmd = """curl -u 7898678:X -H 'Content-Type: application/json' """ \
      """-d '{"message":{"body":"%(t)s"}}' http://sample.com/36576/speak.json""" % \
      {'t': TEXT}
os.system(cmd)
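If you do shell out to curl, the subprocess module (also standard library) avoids the shell-quoting pitfalls of os.system by passing the arguments as a list; a sketch:
import subprocess

# Each argument is passed to curl directly, so TEXT needs no shell escaping.
subprocess.call([
    "curl", "-u", "7898678:X",
    "-H", "Content-Type: application/json",
    "-d", '{"message":{"body":"%s"}}' % TEXT,
    "http://sample.com/36576/speak.json",
])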
If you're willing to relax the standard-library-only restriction, you can use PycURL. Beware that it isn't very Pythonic (it's pretty much just a thin veneer over libcurl), and I'm not sure how compatible it is with Python 3.
While there are ways to handle authentication in urllib2, if you're doing Basic Authorization (which means effectively sending the username and password in clear text) then you can do all of what you want with a urllib2.Request and urllib2.urlopen:
import urllib2

def basic_authorization(user, password):
    s = user + ":" + password
    return "Basic " + s.encode("base64").rstrip()

req = urllib2.Request("http://localhost:8000/36576/speak.json",
                      headers = {
                          "Authorization": basic_authorization("7898678", "X"),
                          "Content-Type": "application/json",
                          # Some extra headers for fun
                          "Accept": "*/*", # curl does this
                          "User-Agent": "my-python-app/1", # otherwise it uses "Python-urllib/..."
                      },
                      data = '{"message":{"body":"TEXT"}}')
f = urllib2.urlopen(req)
I tested this with netcat so I could see that the data sent was identical in both cases, apart from header order. Here the first one was done with curl and the second with urllib2:
% nc -l 8000
POST /36576/speak.json HTTP/1.1
Authorization: Basic Nzg5ODY3ODpY
User-Agent: curl/7.19.4 (universal-apple-darwin10.0) libcurl/7.19.4 OpenSSL/0.9.8k zlib/1.2.3
Host: localhost:8000
Accept: */*
Content-Type: application/json
Content-Length: 27
{"message":{"body":"TEXT"}} ^C
% nc -l 8000
POST /36576/speak.json HTTP/1.1
Accept-Encoding: identity
Content-Length: 27
Connection: close
Accept: */*
User-Agent: my-python-app/1
Host: localhost:8000
Content-Type: application/json
Authorization: Nzg5ODY3ODpY
{"message":{"body":"TEXT"}}^C
(This is slightly tweaked from the output. My test case didn't use the same url path you used.)
There's no need to use the underlying httplib, which doesn't support things that urllib2 gives you, like proxy handling. On the other hand, I do find urllib2 complicated outside of this simple sort of request, so if you want finer control over which headers are sent and in what order, use httplib.
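For readers on Python 3, where urllib2 no longer exists, a rough equivalent of the snippet above using urllib.request (a sketch with the same placeholder URL and credentials):
import base64
import urllib.request

def basic_authorization(user, password):
    credentials = ("%s:%s" % (user, password)).encode()
    return "Basic " + base64.b64encode(credentials).decode()

req = urllib.request.Request(
    "http://localhost:8000/36576/speak.json",
    headers={
        "Authorization": basic_authorization("7898678", "X"),
        "Content-Type": "application/json",
        "Accept": "*/*",
        "User-Agent": "my-python-app/1",
    },
    data=b'{"message":{"body":"TEXT"}}',  # data must be bytes in Python 3
)
f = urllib.request.urlopen(req)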
Thanks everyone, this works:
import urllib2

def speak(status):
    def basic_authorization(user, password):
        s = user + ":" + password
        return "Basic " + s.encode("base64").rstrip()
    req = urllib2.Request("http://example.com/60/speak.json",
                          headers = {
                              "Authorization": basic_authorization("2345670", "X"),
                              "Content-Type": "application/json",
                              "Accept": "*/*",
                              "User-Agent": "my-python-app/1",
                          },
                          # quote status so the body stays valid JSON
                          data = '{"message":{"body":"' + status + '"}}')
    f = urllib2.urlopen(req)

speak('Yay')
Take a look at pycurl http://pycurl.sourceforge.net/
