Travelport Galileo Python SOAP client

I need to develop a Python SOAP client for the Travelport Galileo uAPI.
These are 30-day trial credentials for the Travelport Universal API:
Universal API User ID: Universal API/uAPI2514620686-0edbb8e4
Universal API Password: D54HWfck9nRZNPbXmpzCGwc95
Branch Code for Galileo (1G): P7004130
URL: https://emea.universal-api.pp.travelport.com/B2BGateway/connect/uAPI/
This is a quote from the Galileo documentation:
HTTP Header
The HTTP header includes:
SOAP endpoints, which vary by:
Geographical region.
Requested service. In the preceding example, the HotelService is used for the endpoint; however, the service name is modified based on the request transaction.
gzip compression, which is optional, but strongly recommended. To accept gzip compression in the response, specify “Accept-Encoding: gzip,deflate” in the header.
Authorization, which follows the standard basic authorization pattern.
The text that follows “Authorization: Basic” can be encoded using Base 64. This functionality is supported by most programming languages.
The syntax of the authorization credentials must include the prefix "Universal API/" before the User Name and Password assigned by Travelport.
POST https://americas.universal-api.pp.travelport.com/
B2BGateway/connect/uAPI/HotelService HTTP/2.0
Accept-Encoding: gzip,deflate
Content-Type: text/xml;charset=UTF-8
SOAPAction: ""
Authorization: Basic UniversalAPI/UserName:Password
Content-Length: length
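For example, the Basic value can be produced in Python like this (a minimal sketch; UserName and Password are placeholders):
import base64

# the "Universal API/" prefix must be part of the user name, per the documentation above
credentials = "Universal API/UserName:Password"
auth_header = "Authorization: Basic " + base64.b64encode(credentials)
print auth_header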
This is my Python code:
import urllib2
import base64
import suds.client
import suds.transport.https


class HTTPSudsPreprocessor(urllib2.BaseHandler):
    def http_request(self, req):
        # sample request body (not actually sent from this handler)
        message = """
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:air="http://www.travelport.com/schema/air_v16_0" xmlns:com="http://www.travelport.com/schema/common_v13_0">
            <soapenv:Header/>
            <soapenv:Body>
                <air:AvailabilitySearchReq AuthorizedBy="Test" TargetBranch="P7004130">
                    <air:SearchAirLeg>
                        <air:SearchOrigin>
                            <com:Airport Code="LHR"/>
                        </air:SearchOrigin>
                        <air:SearchDestination>
                            <com:Airport Code="JFK"/>
                        </air:SearchDestination>
                        <air:SearchDepTime PreferredTime="2011-11-08"/>
                    </air:SearchAirLeg>
                </air:AvailabilitySearchReq>
            </soapenv:Body>
        </soapenv:Envelope>
        """
        auth = base64.b64encode('Universal API/uAPI2514620686-0edbb8e4:D54HWfck9nRZNPbXmpzCGwc95')
        req.add_header('Content-Type', 'text/xml; charset=utf-8')
        req.add_header('Accept', 'gzip,deflate')
        req.add_header('Cache-Control', 'no-cache')
        req.add_header('Pragma', 'no-cache')
        req.add_header('SOAPAction', '')
        req.add_header('Authorization', 'Basic %s' % auth)
        return req

    https_request = http_request


URL = "https://emea.universal-api.pp.travelport.com/B2BGateway/connect/uAPI/"
https = suds.transport.https.HttpTransport()
opener = urllib2.build_opener(HTTPSudsPreprocessor)
https.urlopener = opener
suds.client.Client(URL, transport=https)
But it is not working.
Traceback (most recent call last):
File "soap.py", line 42, in <module>
suds.client.Client(URL, transport = https)
File "/usr/local/lib/python2.7/site-packages/suds/client.py", line 112, in __init__
self.wsdl = reader.open(url)
File "/usr/local/lib/python2.7/site-packages/suds/reader.py", line 152, in open
d = self.fn(url, self.options)
File "/usr/local/lib/python2.7/site-packages/suds/wsdl.py", line 136, in __init__
d = reader.open(url)
File "/usr/local/lib/python2.7/site-packages/suds/reader.py", line 79, in open
d = self.download(url)
File "/usr/local/lib/python2.7/site-packages/suds/reader.py", line 95, in download
fp = self.options.transport.open(Request(url))
File "/usr/local/lib/python2.7/site-packages/suds/transport/http.py", line 64, in open
raise TransportError(str(e), e.code, e.fp)
suds.transport.TransportError: HTTP Error 500: Dynamic backend host not specified
I've been trying to solve this problem for the past two weeks, so if you can, please advise me on a solution.

I think you can try downloading the WSDL files as a ZIP archive from this URL: https://support.travelport.com/webhelp/uAPI/uAPI.htm#Getting_Started/Universal_API_Schemas_and_WSDLs.htm
You will then be able to generate your client classes from those WSDL files, because there is no WSDL endpoint on https://emea.universal-api.pp.travelport.com/B2BGateway/connect/uAPI/
(like ?wsdl or /.wsdl).
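For example, once the WSDLs are unpacked locally, the suds client could be built against one of them while pointing at the regional endpoint (a sketch; the local path, WSDL file name, and AirService endpoint are assumptions to adjust to the contents of the ZIP):
import base64
from suds.client import Client

# local WSDL from the downloaded ZIP (path/file name assumed)
wsdl_url = "file:///path/to/wsdl/air_v16_0/Air.wsdl"
# regional service endpoint (service name assumed)
service_url = "https://emea.universal-api.pp.travelport.com/B2BGateway/connect/uAPI/AirService"

auth = base64.b64encode("Universal API/uAPI2514620686-0edbb8e4:D54HWfck9nRZNPbXmpzCGwc95")
client = Client(wsdl_url,
                location=service_url,
                headers={"Authorization": "Basic %s" % auth})
print client   # lists the ports and operations suds generated from the WSDL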

Related

ServiceNow - How to use SOAP to download reports

I need to automate downloading reports from ServiceNow.
I've been able to automate it using Python, Selenium, and win32com with the following method:
https://test.service-now.com/sys_report_template.do?CSV&jvar_report_id=92a....7aa
using Selenium to access ServiceNow and modifying Firefox's default download option to dump the file to a folder on a Windows machine.
However, since all of this may be ported to a Linux server, we would like to port it to SOAP or cURL.
I came across the ServiceNow libraries for Python here.
I tried them out, and the following code works if I set the login, password, and instance name as listed at the site, using the following from ServiceNow.py:
class Change(Base):
    __table__ = 'change_request.do'
and the following in the client-side script as listed on the site:
# Fetch changes updated on the last 5 minutes
changes = chg.last_updated(minutes=5)
# print changes client side script.
for eachline in changes:
    print eachline
However, when I replace the URL with sys_report_template.do, I get this error:
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\SOAPpy\Parser.py", line 1080, in _parseSOAP
parser.parse(inpsrc)
File "C:\Python27\Lib\xml\sax\expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "C:\Python27\Lib\xml\sax\xmlreader.py", line 125, in parse
self.close()
File "C:\Python27\Lib\xml\sax\expatreader.py", line 220, in close
self.feed("", isFinal = 1)
File "C:\Python27\Lib\xml\sax\expatreader.py", line 214, in feed
self._err_handler.fatalError(exc)
File "C:\Python27\Lib\xml\sax\handler.py", line 38, in fatalError
raise exception
SAXParseException: <unknown>:1:0: no element found
Here is the relevant code:
from servicenow import ServiceNow
from servicenow import Connection
from servicenow.drivers import SOAP

# For SOAP connection
conn = SOAP.Auth(username='abc', password='def', instance='test')
rpt = ServiceNow.Base(conn)
rpt.__table__ = "sys_report_template.do?CSV"
# jvar_report_id replaced with .... to protect confidentiality
report = rpt.fetch_one({'jvar_report_id': '92a6760a......aas'})
for eachline in report:
    print eachline
So, my question is: what can be done to make this work?
I looked on the web for resources and help, but didn't find any.
Any help is appreciated.
After much research I was able to use the following method to get the report in CSV format from ServiceNow. I thought I would post it here in case anyone else runs into a similar issue.
import requests
import json
# Set the request parameters
url= 'https://myinstance.service-now.com/sys_report_template.do?CSV&jvar_report_id=929xxxxxxxxxxxxxxxxxxxx0c755'
user = 'my_username'
pwd = 'my_password'
# Set proper headers
headers = {"Accept":"application/json"}
# Do the HTTP request
response = requests.get(url, auth=(user, pwd), headers=headers )
response.raise_for_status()
print response.text
response.text now has the report in CSV format.
Next I need to figure out how to parse the response object to extract the CSV data in the correct format.
Once done, I will post it over here. But for now this answers my question.
I tried this and it's working as expected:
import requests

url = 'https://myinstance.service-now.com/sys_report_template.do?CSV&jvar_report_id=929xxxxxxxxxxxxxxxxxxxx0c755'
user = 'my_username'
pwd = 'my_password'
headers = {"Accept": "application/json"}

response = requests.get(url, auth=(user, pwd), headers=headers)

file_name = "abc.csv"
with open(file_name, 'wb') as out_file:
    out_file.write(response.content)
del response
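As a follow-up to the parsing question above, the saved CSV can be read with the standard csv module (a minimal sketch; it assumes the abc.csv file written by the snippet above):
import csv

# read the downloaded report and turn each row into a dict keyed by the header row
with open("abc.csv", "rb") as f:
    rows = list(csv.reader(f))

header, data = rows[0], rows[1:]
for row in data:
    print dict(zip(header, row))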

Suds throwing 403 Forbidden on SharePoint

I'm having an issue with getting suds to authenticate using the python-ntlm package against a SharePoint 2010 site. I have tried the solutions linked in this thread with no luck.
I'm running Python 2.7.10, suds 0.4, and python-ntlm 1.1.0.
Here's my code:
from suds.client import *
from suds.transport.https import WindowsHttpAuthenticated
import logging
logging.basicConfig(level=logging.INFO)
logging.getLogger('suds.client').setLevel(logging.DEBUG)
logging.getLogger('suds.transport').setLevel(logging.DEBUG)
logging.getLogger('ntlm').setLevel(logging.DEBUG)
url = "https://web.site.com/sites/Collection/_vti_bin/Lists.asmx?WSDL"
ntlm = WindowsHttpAuthenticated(username='DOMAIN\UserName',
                                password='Password')
client = Client(url, transport=ntlm)
client.service.GetListCollection()
Here's the debug output:
DEBUG:suds.transport.http:opening (https://web.site.com/sites/Collection/_vti_bin/Lists.asmx?WSDL)
DEBUG:suds.transport.http:opening (http://www.w3.org/2001/XMLSchema.xsd)
DEBUG:suds.transport.http:opening (http://www.w3.org/2001/xml.xsd)
DEBUG:suds.client:sending to (https://web.site.com/sites/Collection/_vti_bin/Lists.asmx)
message:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.microsoft.com/sharepoint/soap/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Header/>
<ns0:Body>
<ns1:GetListCollection/>
</ns0:Body>
</SOAP-ENV:Envelope>
DEBUG:suds.client:headers = {'SOAPAction': u'"http://schemas.microsoft.com/sharepoint/soap/GetListCollection"', 'Content-Type': 'text/xml; charset=utf-8'}
DEBUG:suds.transport.http:sending:
URL:https://web.site.com/sites/Collection/_vti_bin/Lists.asmx
HEADERS: {'SOAPAction': u'"http://schemas.microsoft.com/sharepoint/soap/GetListCollection"', 'Content-Type': 'text/xml; charset=utf-8', 'Content-type': 'text/xml; charset=utf-8', 'Soapaction': u'"http://schemas.microsoft.com/sharepoint/soap/GetListCollection"'}
MESSAGE:
<?xml version="1.0" encoding="UTF-8"?><SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.microsoft.com/sharepoint/soap/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"><SOAP-ENV:Header/><ns0:Body><ns1:GetListCollection/></ns0:Body></SOAP-ENV:Envelope>
ERROR:suds.client:<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.microsoft.com/sharepoint/soap/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Header/>
<ns0:Body>
<ns1:GetListCollection/>
</ns0:Body>
</SOAP-ENV:Envelope>
DEBUG:suds.client:http failed:
403 FORBIDDEN
Traceback (most recent call last):
File "C:/Users/username/PycharmProjects/Suds/soap.py", line 14, in <module>
client.service.GetListCollection()
File "C:\Python27\lib\site-packages\suds\client.py", line 542, in __call__
return client.invoke(args, kwargs)
File "C:\Python27\lib\site-packages\suds\client.py", line 602, in invoke
result = self.send(soapenv)
File "C:\Python27\lib\site-packages\suds\client.py", line 649, in send
result = self.failed(binding, e)
File "C:\Python27\lib\site-packages\suds\client.py", line 708, in failed
raise Exception((status, reason))
Exception: (403, u'Forbidden')
However, the equivalent works perfectly fine in cURL (formatted for readability):
curl -u 'Username':'Password' --ntlm -X POST \
  -H 'Content-Type: text/xml' \
  -H 'SOAPAction: "http://schemas.microsoft.com/sharepoint/soap/GetListCollection"' \
  "https://web.site.com/sites/Collection/_vti_bin/Lists.asmx" \
  --data-binary @soapenvelope.xml
I also have not been able to find a way to force suds to run through a Fiddler proxy while also passing NTLM authentication. As outlined here, it seems to ignore proxy settings anyway, and I can't find a way to mix and match the local proxy through Fiddler while having it attempt NTLM authentication against the Lists web service.
Environment Information (to make this easily searchable):
SharePoint 2010
Forms Based Authentication / FedAuth
External Oracle SSO using SAML v1.x
After a good amount of reading on the NTLM protocol, I figured out that cURL was sending the NTLM Type 1 message on the first request. However, suds (and PowerShell's New-WebServiceProxy) was not doing that. It was getting the 403 Forbidden, but did not carry on with the handshake process since that response was unexpected. Using create_NTLM_NEGOTIATE_MESSAGE() in python-ntlm3, I was able to generate that Type 1 message.
By adding the 'Authorization' header with the Type 1 message, I forced the 401 Unauthorized, which caused WindowsHttpAuthenticated to complete the handshake as expected.
from ntlm3 import ntlm
from suds.client import Client
from suds.transport.https import WindowsHttpAuthenticated

url = "https://web.site.com/sites/Collection/_vti_bin/Lists.asmx"
wsdl = (url + "?WSDL")
domain = 'DOM'
username = 'USER'
password = 'PASS'

transport = WindowsHttpAuthenticated(username=username,
                                     password=password)
client = Client(url=wsdl,
                location=url,
                transport=transport)

negotiate = "%s\\%s" % (domain, username)
ntlmauth = 'NTLM %s' % ntlm.create_NTLM_NEGOTIATE_MESSAGE(negotiate).decode('ascii')
header = {'Authorization': ntlmauth}
client.set_options(headers=header)

client.service.GetList('Test')
The magical reason all this is happening is that we use Forms Based Authentication (FBA). Normal authentication requests get routed to an Oracle SSO, which is why (I believe) it is not responding normally and returns the 403 Forbidden.
Hopefully this helps keep someone from banging their head against the wall.

Querying Office 365 Service Communications API with Python

I am trying to get the name of all Office 365 services by querying the Service Communications API.
I have been able to complete the task using a PowerShell script, but am unable to do the same using Python.
When using Python, I get a 200 response code, but have been unable to parse what is returned. Any help would be much appreciated.
My attempt to convert the PowerShell script to Python is below.
import json
import requests
from requests.auth import HTTPBasicAuth
username = "username"
password = "password"
# Base Service Communications URI
baseuri = "https://api.admin.microsoftonline.com/shdtenantcommunications.svc"
headers = {"accept": "application/json;odata=verbose"}
auth = {"username": username, "password": password}
# URI Paths
serviceinfo = "/GetServiceInformation"
register = "/Register"
response = requests.options(baseuri+register, auth=HTTPBasicAuth(username, password))
print("Registration status code: %s" % response.status_code)
if (response is not None and 200 == response.status_code):
    info = requests.options(baseuri+serviceinfo, auth=HTTPBasicAuth(username, password))
    print("Info status code: %s" % info.status_code)
    data = json.loads(info.text)
The Python script returns an error. Specifically, it returns the following:
Registration status code: 200
Info status code: 200
Traceback (most recent call last):
File "o365_option.py", line 22, in <module>
data = json.loads(info.text)
File "/usr/local/lib/python2.7/json/__init__.py", line 326, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python2.7/json/decoder.py", line 384, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
There are a few issues with your Python script. Here is the corrected Python script to duplicate the results from the PowerShell script you posted:
import json
import requests
from requests.auth import HTTPBasicAuth
username = "username"
password = "password"
# Base Service Communications URI
baseuri = "https://api.admin.microsoftonline.com/shdtenantcommunications.svc"
headers = {"accept": "application/json;odata=verbose"}
auth = {"username": username, "password": password}
# URI Paths
serviceinfo = "/GetServiceInformation"
register = "/Register"
payload = {'userName': username, 'password': password}
myheaders = {'Content-Type': 'application/json'}
data=json.dumps(payload)
response = requests.post(baseuri+register,data=json.dumps(payload),headers=myheaders)
responsedata = json.loads(response.text)
cookie = responsedata.get("RegistrationCookie")
payload1 = {'lastCookie':cookie,'locale':"en-US"}
response = requests.post(baseuri+serviceinfo,data=json.dumps(payload1),headers=myheaders)
responsedata = json.loads(response.text)
for myobject in responsedata:
    print myobject.get("ServiceName")
This is the response you will get:
"Exchange Online"
"Office Subscription"
"Identity Service"
"Office 365 Portal"
"Skype for Business"
"SharePoint Online"
"Rights Management Service"
"Yammer Enterprise"
"OneDrive for Business"
"Mobile Device Management"
Additionally, please note that there is a new version of the Office 365 Service Communications API in public preview which is available here:
https://msdn.microsoft.com/en-us/library/office/dn707385.aspx
It has a few new methods that might be interesting to you, and it is a bit easier to develop against. The new API follows the OAuth 2.0 flow that the other Microsoft APIs are using, so if you are using multiple Microsoft APIs, you will already be familiar with the flow.
Let me know if this answers your question or if you have any additional questions.
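As a rough illustration of that OAuth 2.0 client-credentials flow in Python (a sketch; the tenant, client ID, and client secret are placeholders, and the token endpoint and resource URI are assumptions based on the standard Azure AD flow rather than taken from the preview documentation):
import requests

tenant = "contoso.onmicrosoft.com"                                         # placeholder tenant
token_url = "https://login.microsoftonline.com/%s/oauth2/token" % tenant   # assumed AAD token endpoint
payload = {
    "grant_type": "client_credentials",
    "client_id": "your-client-id",              # placeholder
    "client_secret": "your-client-secret",      # placeholder
    "resource": "https://manage.office.com",    # assumed resource URI
}
token = requests.post(token_url, data=payload).json()["access_token"]
headers = {"Authorization": "Bearer %s" % token}   # use on subsequent API calls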

python httplib2 certificate verify failed

I have tried everything I can find to get this to work...
I'm working on a plugin for a Python-based task program (called GTG). I'm running GNOME on openSUSE Linux.
Code (Python 2.7):
def initialize(self):
    """
    Intialize backend: try to authenticate. If it fails, request an authorization.
    """
    super(Backend, self).initialize()
    path = os.path.join(CoreConfig().get_data_dir(), 'backends/gtask', 'storage_file-%s' % self.get_id())
    # Try to create leading directories that path
    path_dir = os.path.dirname(path)
    if not os.path.isdir(path_dir):
        os.makedirs(path_dir)
    self.storage = Storage(path)
    self.authenticate()

def authenticate(self):
    """ Try to authenticate by already existing credences or request an authorization """
    self.authenticated = False
    credentials = self.storage.get()
    if credentials is None or credentials.invalid == True:
        self.request_authorization()
    else:
        self.apply_credentials(credentials)
        # Request periodic import, avoid waiting a long time
        # self.start_get_tasks()

def apply_credentials(self, credentials):
    """ Finish authentication or request for an authorization by applying the credentials """
    http = httplib2.Http(ca_certs = '/etc/ssl/certs/ca_certs.pem', disable_ssl_certificate_validation=True)
    http = credentials.authorize(http)
    # Build a service object for interacting with the API.
    self.service = build_service(serviceName='tasks', version='v1', http=http, developerKey='AIzaSyAmUlk8_iv-rYDEcJ2NyeC_KVPNkrsGcqU')
    # self.service = build_service(serviceName='tasks', version='v1')
    self.authenticated = True

def _authorization_step2(self, code):
    credentials = self.flow.step2_exchange(code)
    # credential = self.flow.step2_exchange(code)
    self.storage.put(credentials)
    credentials.set_store(self.storage)
    return credentials

def request_authorization(self):
    """ Make the first step of authorization and open URL for allowing the access """
    self.flow = OAuth2WebServerFlow(client_id=self.CLIENT_ID,
                                    client_secret=self.CLIENT_SECRET,
                                    scope='https://www.googleapis.com/auth/tasks',
                                    redirect_uri='http://localhost:8080',
                                    user_agent='GTG')
    oauth_callback = 'oob'
    auth_uri = self.flow.step1_get_authorize_url(oauth_callback)
    # credentials = self.flow.step2_exchange(code)
    # url = self.flow.step1_get_authorize_url(oauth_callback)
    browser_thread = threading.Thread(target=lambda: webbrowser.open_new(auth_uri))
    browser_thread.daemon = True
    browser_thread.start()
    # Request the code from user
    BackendSignals().interaction_requested(self.get_id(), _(
        "You need to <b>authorize GTG</b> to access your tasks on <b>Google</b>.\n"
        "<b>Check your browser</b>, and follow the steps there.\n"
        "When you are done, press 'Continue'."),
        BackendSignals().INTERACTION_TEXT,
        "on_authentication_step")

def on_authentication_step(self, step_type="", code=""):
    if step_type == "get_ui_dialog_text":
        return _("Code request"), _("Paste the code Google has given you"
                                    "here")
    elif step_type == "set_text":
        try:
            credentials = self._authorization_step2(code)
        except FlowExchangeError, e:
            # Show an error to user and end
            self.quit(disable = True)
            BackendSignals().backend_failed(self.get_id(),
                                            BackendSignals.ERRNO_AUTHENTICATION)
            return
        self.apply_credentials(credentials)
        # Request periodic import, avoid waiting a long time
        self.start_get_tasks()
The browser window opens up and I am presented with a code from Google. The program opens a small window where I can enter the code from Google. When that happens, I get this in the console:
No handlers could be found for logger "oauth2client.util"
Created new window in existing browser session.
[522:549:0108/063825:ERROR:nss_util.cc(821)] After loading Root Certs, loaded==false: NSS error code: -8018
but the SSL icon is green in Chrome...
Then, when I submit the code, I get:
Exception in thread Thread-10:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/site-packages/GTG/backends/backend_gtask.py", line 204, in on_authentication_step
credentials = self._authorization_step2(code)
File "/usr/lib/python2.7/site-packages/GTG/backends/backend_gtask.py", line 151, in _authorization_step2
credentials = self.flow.step2_exchange(code)
File "/usr/lib/python2.7/site-packages/oauth2client/util.py", line 132, in positional_wrapper
return wrapped(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/oauth2client/client.py", line 1283, in step2_exchange
headers=headers)
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1586, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1328, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1250, in _conn_request
conn.connect()
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1037, in connect
raise SSLHandshakeError(e)
SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
The file is called backend_gtask.py...
I have tried importing the certificate as stated here: How to update cacerts.txt of httplib2 for Github?
I have tried disabling verification (httplib2.Http(disable_ssl_certificate_validation=True)) as stated all over the web.
I have updated the Python packages (which seemed to make things worse).
I have copied ca_certs.pem back and forth between /etc/ssl... and /usr/lib/python2.7/...
When I visit the auth page in a browser, it says the certificate is verified...
What else can I possibly check?
SHORT TEST CODE:
from oauth2client.client import OAuth2WebServerFlow
from oauth2client.tools import run
from oauth2client.file import Storage

CLIENT_ID = 'id'
CLIENT_SECRET = 'secret'

flow = OAuth2WebServerFlow(client_id=CLIENT_ID,
                           client_secret=CLIENT_SECRET,
                           scope='https://www.googleapis.com/auth/tasks',
                           redirect_uri='http://localhost:8080')
storage = Storage('creds.data')
credentials = run(flow, storage)
print "access_token: %s" % credentials.access_token
Found that here: https://github.com/burnash/gspread/wiki/How-to-get-OAuth-access-token-in-console%3F
OK...
Big thanks to Steffen Ullrich.
httplib2 version 0.9 tries to use the system certificates rather than the certs.txt file that used to be shipped with it, and it also enforces verification.
httplib2 can take a couple of useful parameters, notably ca_certs. Use it to point to the actual *.pem file in your SSL installation. It cannot be a folder; it must be a real file.
I use the following in the initialization of the plugin:
self.http = httplib2.Http(ca_certs = '/etc/ssl/ca-bundle.pem')
Then, for all subsequent calls to httplib2 or the Google client libraries, I pass my pre-built http object as a parameter, like this:
credentials = self.flow.step2_exchange(code, self.http)
self.http = credentials.authorize(self.http)
Now SSL connections work with the new httplib2...
I will eventually have to make sure the plugin can find certificates on any system, but at least I know what the problem was.
Thanks again to Steffen Ullrich for walking me through this.
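As a rough idea for the "find certificates on any system" part, one could probe a few well-known bundle locations before building the Http object (a sketch; the candidate path list is an assumption, not part of the plugin):
import os
import httplib2

# common CA bundle locations on major distributions
CANDIDATE_BUNDLES = [
    '/etc/ssl/ca-bundle.pem',               # openSUSE
    '/etc/ssl/certs/ca-certificates.crt',   # Debian/Ubuntu
    '/etc/pki/tls/certs/ca-bundle.crt',     # Fedora/RHEL
]

# pick the first bundle that actually exists, else fall back to httplib2's default
ca_certs = next((p for p in CANDIDATE_BUNDLES if os.path.isfile(p)), None)
http = httplib2.Http(ca_certs=ca_certs)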
See this answer for an easier fix without touching your code: just set your certificate bundle pem file path in an environment variable:
export HTTPLIB2_CA_CERTS="\path\to\your\ca-bundle"

Create and parse multipart HTTP requests in Python

I'm trying to write some Python code which can create multipart MIME HTTP requests on the client and then appropriately interpret them on the server. I have, I think, partially succeeded on the client end with this:
from email.mime.multipart import MIMEMultipart, MIMEBase
import httplib
h1 = httplib.HTTPConnection('localhost:8080')
msg = MIMEMultipart()
fp = open('myfile.zip', 'rb')
base = MIMEBase("application", "octet-stream")
base.set_payload(fp.read())
msg.attach(base)
h1.request("POST", "http://localhost:8080/server", msg.as_string())
The only problem with this is that the email library also includes the Content-Type and MIME-Version headers, and I'm not sure how they're going to be related to the HTTP headers included by httplib:
Content-Type: multipart/mixed; boundary="===============2050792481=="
MIME-Version: 1.0
--===============2050792481==
Content-Type: application/octet-stream
MIME-Version: 1.0
This may be the reason that when this request is received by my web.py application, I just get an error message. The web.py POST handler:
class MultipartServer:
    def POST(self, collection):
        print web.input()
Throws this error:
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/application.py", line 242, in process
return self.handle()
File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/application.py", line 233, in handle
return self._delegate(fn, self.fvars, args)
File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/application.py", line 415, in _delegate
return handle_class(cls)
File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/application.py", line 390, in handle_class
return tocall(*args)
File "/home/richard/Development/server/webservice.py", line 31, in POST
print web.input()
File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/webapi.py", line 279, in input
return storify(out, *requireds, **defaults)
File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/utils.py", line 150, in storify
value = getvalue(value)
File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/utils.py", line 139, in getvalue
return unicodify(x)
File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/utils.py", line 130, in unicodify
if _unicode and isinstance(s, str): return safeunicode(s)
File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/utils.py", line 326, in safeunicode
return obj.decode(encoding)
File "/usr/lib/python2.6/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode bytes in position 137-138: invalid data
My line of code is represented by the error line about half way down:
File "/home/richard/Development/server/webservice.py", line 31, in POST
print web.input()
It's coming along, but I'm not sure where to go from here. Is this a problem with my client code, or a limitation of web.py (perhaps it just can't support multipart requests)? Any hints or suggestions of alternative code libraries would be gratefully received.
EDIT
The error above was caused by the data not being automatically base64 encoded. Adding
encoders.encode_base64(base)
gets rid of this error, and now the problem is clear: the HTTP request isn't being interpreted correctly on the server, presumably because the email library is including what should be the HTTP headers in the body instead:
<Storage {'Content-Type: multipart/mixed': u'',
' boundary': u'"===============1342637378=="\n'
'MIME-Version: 1.0\n\n--===============1342637378==\n'
'Content-Type: application/octet-stream\n'
'MIME-Version: 1.0\n'
'Content-Transfer-Encoding: base64\n'
'\n0fINCs PBk1jAAAAAAAAA.... etc
So something is not right there.
Thanks
Richard
I used this package by Will Holcomb, http://pypi.python.org/pypi/MultipartPostHandler/0.1.0, to make multipart requests with urllib2; it may help you out.
After a bit of exploration, the answer to this question has become clear. The short answer is that although Content-Disposition is optional in a MIME-encoded message, web.py requires it for each MIME part in order to correctly parse out the HTTP request.
Contrary to other comments on this question, the difference between HTTP and email is irrelevant, as they are simply transport mechanisms for the MIME message and nothing more. Multipart/related (not multipart/form-data) messages are common in content-exchanging web services, which is the use case here. The code snippets provided are accurate, though, and led me to a slightly briefer solution to the problem.
# imports carried over from the question's snippet, plus the encoders module used below
import httplib
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart

# open an HTTP connection
h1 = httplib.HTTPConnection('localhost:8080')
# create a mime multipart message of type multipart/related
msg = MIMEMultipart("related")
# create a mime-part containing a zip file, with a Content-Disposition header
# on the section
fp = open('file.zip', 'rb')
base = MIMEBase("application", "zip")
base['Content-Disposition'] = 'file; name="package"; filename="file.zip"'
base.set_payload(fp.read())
encoders.encode_base64(base)
msg.attach(base)
# Here's a rubbish bit: chomp through the header rows, until hitting a newline on
# its own, and read each string on the way as an HTTP header, and reading the rest
# of the message into a new variable
header_mode = True
headers = {}
body = []
for line in msg.as_string().splitlines(True):
    if line == "\n" and header_mode == True:
        header_mode = False
    if header_mode:
        (key, value) = line.split(":", 1)
        headers[key.strip()] = value.strip()
    else:
        body.append(line)
body = "".join(body)
# do the request, with the separated headers and body
h1.request("POST", "http://localhost:8080/server", body, headers)
This is picked up perfectly well by web.py, so it's clear that email.mime.multipart is suitable for creating MIME messages to be transported by HTTP, with the exception of its header handling.
My other overall concern is scalability. Neither this solution nor the others proposed here scale well, as they read the contents of the file into a variable before bundling it up in the MIME message. A better solution would be one which could serialise on demand as the content is piped out over the HTTP connection. It's not urgent for me to fix that, but I'll come back here with a solution if I get to it.
There are a number of things wrong with your request. As TokenMacGuy suggests, multipart/mixed is unused in HTTP; use multipart/form-data instead. In addition, parts should have a Content-Disposition header. A Python fragment to do that can be found in the Code Recipes.
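For illustration, a multipart/form-data request with a Content-Disposition header on the file part can also be built by hand roughly like this (a sketch, not the Code Recipes fragment; the field and file names are placeholders):
import httplib
import mimetools

# pick a unique boundary and read the file to send
boundary = mimetools.choose_boundary()
with open('myfile.zip', 'rb') as fp:
    payload = fp.read()

# one part, with Content-Disposition and Content-Type headers, framed by the boundary
body = ("--%s\r\n"
        'Content-Disposition: form-data; name="package"; filename="myfile.zip"\r\n'
        "Content-Type: application/octet-stream\r\n"
        "\r\n") % boundary
body += payload
body += "\r\n--%s--\r\n" % boundary

headers = {"Content-Type": "multipart/form-data; boundary=%s" % boundary}
conn = httplib.HTTPConnection('localhost:8080')
conn.request("POST", "/server", body, headers)
print conn.getresponse().status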
