setting and reading cookies in mod_python (apache) - python

I've seen plenty of things explaining how to read and write cookies; however, I have no clue how to do it in mod_python under Apache. I tried putting this at the start of my HTML code, but it says to put it in the HTTP header. How do I do that?
Also, how do I retrieve them? I was originally looking mainly at this site:
http://webpython.codepoint.net/cgi_set_the_cookie
My code currently starts like this (and it displays as part of the HTML)
Content-Type: text/html
Set-Cookie: test=1
<html>
<head>

mod_python is not CGI, and it provides its own way to set and read cookies:
from mod_python import Cookie, apache
import time

def handler(req):
    # read a cookie
    spam_cookie = Cookie.get_cookie(req, 'spam')
    # set a cookie
    egg_cookie = Cookie.Cookie('eggs', 'spam')
    egg_cookie.expires = time.time() + 300
    Cookie.add_cookie(req, egg_cookie)
    req.write("<html><head></head><body>There's a cookie</body></html>")
    return apache.OK
You'll find more documentation here: http://www.modpython.org/live/current/doc-html/pyapi-cookie.html
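For completeness, a slightly fuller sketch of the same handler that also sets the content type and actually uses the cookie value; it assumes Cookie.get_cookie returns None when the browser sent no such cookie:

from mod_python import Cookie, apache
import time

def handler(req):
    req.content_type = 'text/html'

    # Read the cookie; assumed to be None when it is absent.
    spam_cookie = Cookie.get_cookie(req, 'spam')
    greeting = 'Welcome back' if spam_cookie is not None else 'First visit'

    # Set (or refresh) a cookie that expires in five minutes.
    egg_cookie = Cookie.Cookie('eggs', 'spam')
    egg_cookie.expires = time.time() + 300
    Cookie.add_cookie(req, egg_cookie)

    req.write('<html><head></head><body>%s</body></html>' % greeting)
    return apache.OK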

Related

Python urllib - unable to authenticate automatically and receiving 302 error on NASA server

I use a NASA GSFC server to retrieve data from their archives.
I send a request and receive the response as plain text.
I discovered that they amended their page so that login is now required.
However, even after logging in I'm still receiving an error.
I read the information provided in the thread how do python capture 302 redirect url
and also tried the urllib2 and requests libraries, but I'm still receiving an error.
Currently the part of my code responsible for downloading data looks as follows:
def getSampleData():
    import urllib
    # I approved application according to:
    # http://disc.sci.gsfc.nasa.gov/registration/authorizing-gesdisc-data-access-in-earthdata_login
    # Query: http://hydro1.sci.gsfc.nasa.gov/dods/_expr_{GLDAS_NOAH025SUBP_3H}{ave(rainf,time=00Z23Oct2016,time=00Z24Oct2016)}{17.00:25.25,48.75:54.50,1:1,00Z23Oct2016:00Z23Oct2016}.ascii?result
    sample_query = 'http://hydro1.sci.gsfc.nasa.gov/dods/_expr_%7BGLDAS_NOAH025SUBP_3H%7D%7Bave(rainf,time=00Z23Oct2016,time=00Z24Oct2016)%7D%7B17.00:25.25,48.75:54.50,1:1,00Z23Oct2016:00Z23Oct2016%7D.ascii?result'
    # I've tried also:
    # sock = urllib.urlopen(sample_query, urllib.urlencode({'username':'MyUserName','password':'MyPassword'}))
    # but I was still asked to provide credentials, so I simplified mentioned line to just:
    sock = urllib.urlopen(sample_query)
    print('\n\nCurrent url:\n')
    print(sock.geturl())
    print('\nIs it the same as sample query?')
    print(sock.geturl() == sample_query)
    returnedData = sock.read()
    # returnedData always stores simple page with 302. Why? StackOverflow suggests that urllib and urllib2 handle redirection automatically
    sock.close()
    with open("Output.html", "w") as text_file:
        text_file.write(returnedData)
Output.html content is as follows:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved here.</p>
</body></html>
If I copy-paste sample_query (the sample_query from the function defined above) into a browser, I have no problem receiving the data.
Thus, if there's no hope for a solution, I'm thinking about rewriting my code to use Selenium.
It seems that I figured out how to download the data:
How to authenticate on NASA gsfc server
However, I don't know how to process the dataset.
I would like to display (or write to a text file) the output as raw data (in exactly the same way as I see it in the browser).
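For what it's worth, a minimal sketch of the same download using the requests library, assuming Earthdata credentials; whether the credentials survive the redirect depends on the GES DISC login flow, so treat this as a starting point rather than a working recipe:

import requests

# Hypothetical credentials; sample_query is truncated here - use the full
# query string from the function above.
sample_query = 'http://hydro1.sci.gsfc.nasa.gov/dods/_expr_...'

session = requests.Session()
session.auth = ('MyUserName', 'MyPassword')

# requests follows the 302 automatically; the login step may still be needed.
response = session.get(sample_query, allow_redirects=True)
response.raise_for_status()

# Write the raw body exactly as the browser shows it.
with open('Output.txt', 'wb') as text_file:
    text_file.write(response.content)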

python-requests post with unicode filenames

I've read through several related questions here on SO but didn't manage to find a working solution.
I have a Flask server with this simplified code:
from flask import Flask, Response, request
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

class SendMailAPI(Resource):
    def post(self):
        print request.files
        return Response(status=200)

api.add_resource(SendMailAPI, '/')

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
Then in the client:
# coding:utf-8
import requests
eng_file_name = 'a.txt'
heb_file_name = u'א.txt'
requests.post('http://localhost:5000/', files={'file0': open(eng_file_name, 'rb')})
requests.post('http://localhost:5000/', files={'file0': open(heb_file_name, 'rb')})
When sending the first request with the non-utf-8 filename the server receives the request with the file and prints ImmutableMultiDict([('file0', <FileStorage: u'a.txt' (None)>)]), but when sending the file with the utf-8 filename the server doesn't seem to receive the file as it prints ImmutableMultiDict([]).
I'm using requests 2.3.0, but the problem persists with the latest version as well (2.8.1); Flask is 0.10.1 and Flask-RESTful is 0.3.4.
I've done some digging in the requests code and the request seems to be sent OK (i.e. with the file); I printed the request right before it is sent and can see that the file name was indeed encoded per RFC 2231:
--6ea257530b254861b71626f10a801726
Content-Disposition: form-data; name="file0"; filename*=utf-8''%D7%90.txt
To sum things up, I'm not entirely sure if the problem lies within requests that doesn't properly attach the file to the request or if Flask is having issues with picking up files with file names that are encoded according to RFC2231.
UPDATE: Came across this issue in requests GitHub: https://github.com/kennethreitz/requests/issues/2505
I think maybe there's some confusion about encoding here -
eng_file_name = 'a.txt' # ASCII encoded, by default in Python 2
heb_file_name = u'א.txt' # NOT UTF-8 Encoded - just a unicode object
To send the second one to the server what you want to do is this:
requests.post('http://localhost:5000/', files={'file0': open(heb_file_name.encode('utf-8'), 'rb')})
I'm a little surprised that it doesn't throw an error on the client trying to open the file though - you see nothing on the client end indicating an error?
EDIT: An easy way to confirm or deny my idea is of course to print out the contents from inside the client to ensure it's being read properly.
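A minimal version of that check, using the heb_file_name from the client snippet above:

# If this read succeeds, the unicode filename opens fine on the client,
# which points at how the filename is transmitted rather than at reading the file.
with open(heb_file_name, 'rb') as f:
    print len(f.read())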
I work around this issue by manually reading the file with read() and then posting its contents:
requests.post(upload_url, files={
    'file': ("photo.jpg", open(path_with_unicode_filename, 'rb').read())
})
Try this workaround:
filename.encode("utf-8").decode("iso-8859-1").
Example:
requests.post("https://example.com", files={"file":
("中文filename.txt".encode("utf-8").decode("iso-8859-1"), fobj, mimetype)})
I post this because this is my first result when searching python requests post filename encoding.
There are lots of RFC standards about Content-Disposition encoding.
And it seems that different programs implement this part differently.
See stackoverflow: lots of RFCs and application tests, RFC 2231 - 4, email.utils.encode_rfc2231.
Java version answer here.
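Putting the pieces together, a minimal end-to-end sketch against the Flask server from the question; the iso-8859-1 re-encoding of the filename is the workaround being illustrated, and 'text/plain' is just an assumed content type:

# -*- coding: utf-8 -*-
import requests

heb_file_name = u'א.txt'

with open(heb_file_name, 'rb') as fobj:
    # Re-encode the filename so the multipart Content-Disposition header
    # stays latin-1 safe instead of being emitted as an RFC 2231 filename*.
    safe_name = heb_file_name.encode('utf-8').decode('iso-8859-1')
    requests.post('http://localhost:5000/',
                  files={'file0': (safe_name, fobj, 'text/plain')})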

sending http requests with specific/non-existent http version protocol in Python

Is there some way to send HTTP requests in Python with a specific HTTP version protocol? I think that, with httplib or urllib, it is not possible.
For example: GET / HTTP/6.9
Thanks in advance.
The simple answer to your question is: You're right, neither httplib nor urllib has public, built-in functionality to do this. (Also, you really shouldn't be using urllib for most things—in particular, for urlopen.)
Of course you can always rely on implementation details of those modules, as in Lukas Graf's answer.
Or, alternatively, you could fork one of those modules and modify it, which guarantees that your code will work on other Python 2.x implementations.* Note that httplib is one of the modules that has a link to the source up at the top, which means it's meant to serve as example code, not just as a black-box library.
Or you could just reimplement the lowest-level function that needs to be hooked but that's publicly documented. For httplib, I believe that's httplib.HTTPConnection.putrequest, which is a few hundred lines long.
Or you could pick a different library that has more hooks in it, so you have less to hook.
But really, if you're trying to craft a custom request to manually fingerprint the results, why are you using an HTTP library at all? Why not just do this?
import socket
from contextlib import closing
from functools import partial

host = 'www.example.com'  # placeholder: whatever server you are probing
msg = 'GET / HTTP/6.9\r\n\r\n'
s = socket.create_connection((host, 80))
with closing(s):
    s.send(msg)
    buf = ''.join(iter(partial(s.recv, 4096), ''))
* That's not much of a benefit, given that there will never be a 2.8, all of the existing major 2.7 implementations share the same source for this module, and it's not likely any new 2.x implementation will be any different. And if you go to 3.x, httplib has been reorganized and renamed, while urllib has been removed entirely, so you'll already have bigger changes to worry about.
You can do it easily enough by subclassing httplib.HTTPConnection and redefining the class attribute _http_vsn_str:
from httplib import HTTPConnection

class MyHTTPConnection(HTTPConnection):
    _http_vsn_str = '6.9'

conn = MyHTTPConnection("www.stackoverflow.com")
conn.request("GET", "/")
response = conn.getresponse()

print "Status: {} {}".format(response.status, response.reason)
print "Headers: {}".format(response.getheaders())
print "Body: {}".format(response.read())
Of course this will result in a 400 Bad Request for most servers:
Status: 400 Bad Request
Headers: [('date', 'Tue, 11 Nov 2014 21:21:12 GMT'), ('connection', 'close'), ('content-type', 'text/html; charset=us-ascii'), ('content-length', '311')]
Body: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request</h2>
<hr><p>HTTP Error 400. The request is badly formed.</p>
</BODY></HTML>
This is possible using pycurl with this option:
c.setopt(pycurl.HTTP_VERSION, pycurl.CURL_HTTP_VERSION_1_0)
However, you need to use Linux or macOS, since pycurl is not officially supported on Windows.
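A minimal pycurl sketch using that option; note it can only select versions libcurl itself supports (1.0, 1.1, ...), not an arbitrary string like 6.9:

import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, 'http://www.stackoverflow.com/')
c.setopt(pycurl.HTTP_VERSION, pycurl.CURL_HTTP_VERSION_1_0)  # force HTTP/1.0
c.setopt(pycurl.WRITEFUNCTION, buf.write)
c.perform()
print c.getinfo(pycurl.RESPONSE_CODE)
c.close()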

bad request when using python with suds for sharepoint

I am using Suds to access SharePoint lists through SOAP, but I am having some trouble with a malformed SOAP request.
I am using the following code:
from suds.client import Client
from suds.sax.element import Element
from suds.sax.attribute import Attribute
from suds.transport.https import WindowsHttpAuthenticated
import logging
logging.basicConfig(level=logging.INFO)
logging.getLogger('suds.client').setLevel(logging.DEBUG)
ntlm = WindowsHttpAuthenticated(username='somedomain\\username', password='password')
url = "http://somedomain/sites/somesite/someothersite/somethirdsite/_vti_bin/Lists.asmx?WSDL"
client = Client(url, transport=ntlm)
result = client.service.GetListCollection()
print repr(result)
Every time I run this, I get the result Error 400 Bad request. As I have debugging enabled I can see the resulting envelope:
<SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.microsoft.com/sharepoint/soap/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Header/>
<ns0:Body>
<ns1:GetListCollection/>
</ns0:Body>
</SOAP-ENV:Envelope>
...with this error message:
DEBUG:suds.client:http failed:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request</h2>
<hr><p>HTTP Error 400. The request is badly formed.</p>
</BODY></HTML>
Running the same WSDL (and the raw envelope data as well) through SoapUI, the request returns values as expected. Can anyone see any obvious reason why I get different results with Suds than with SoapUI, and how I can correct this?
UPDATE: After testing the exact same code on a different SharePoint site (i.e. not a sub-sub-subsite with whitespace in its name) and with Java (JAX-WS, which also had issues with the same site, though different ones), it appears to work as expected. As a result I wonder if one of two details may be the reason for these problems:
SOAP implementations have some issues with subsubsubsites in Sharepoint?
SOAP implementations have some issues with whitespace in its name, even if using %20 as a replacement?
I still need to use the original URL with those issues, so any input would be highly appreciated. I assume that since SoapUI worked with the original URL, it should be possible to correct whatever is wrong.
I think I narrowed down the issue, and it is specific to suds (possibly other SOAP implementations as well). Your bullet point:
SOAP implementations have some issues with whitespace in its name, even if using %20 as a replacement?
That's spot on. Turning up debug logging for suds allowed me to grab the endpoint, envelope, and headers. Mimicking the exact same call using cURL returns a valid response, but with suds it throws the bad request.
The issue is that suds takes your WSDL (url parameter) and parses it, but it doesn't include the URL encoded string. This leads to debug messages like this:
DEBUG:suds.transport.http:opening (https://sub.site.com/sites/Site Collection with Spaces/_vti_bin/UserGroup.asmx?WSDL)
<snip>
TransportError: HTTP Error 400: Bad Request
Piping this request through a Fiddler proxy showed that it was running the request against the URL https://sub.site.com/sites/Site because of the way it parses the WSDL. The issue is that you aren't passing the location parameter to suds.client.Client. The following code gives me valid responses every time:
from ntlm3 import ntlm
from suds.client import Client
from suds.transport.https import WindowsHttpAuthenticated

# URL without ?WSDL
url = 'https://sub.site.com/sites/Site%20Collection%20with%20Spaces/_vti_bin/Lists.asmx'

# Create NTLM transport handler
transport = WindowsHttpAuthenticated(username='foo',
                                     password='bar')

# We use FBA, so this forces it to challenge us with
# a 401 so WindowsHttpAuthenticated can take over.
msg = ("%s\\%s" % ('DOM', 'foo'))
auth = 'NTLM %s' % ntlm.create_NTLM_NEGOTIATE_MESSAGE(msg).decode('ascii')

# Create the client and append ?WSDL to the URL.
client = Client(url=(url + "?WSDL"),
                location=url,
                transport=transport)

# Add the NTLM header to force negotiation.
header = {'Authorization': auth}
client.set_options(headers=header)
One caveat: Using quote from urllib works, but you cannot encode the entire URL or it fails to recognize the URL. You are better off just doing a replace on spaces with %20.
url = url.replace(' ','%20')
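For reference, a sketch of the quote() variant the caveat refers to (Python 2's urllib); keeping ':' and '/' in the safe set is what stops the whole URL from being mangled:

from urllib import quote

url = 'https://sub.site.com/sites/Site Collection with Spaces/_vti_bin/Lists.asmx'
# Only the spaces (and other unsafe characters) are percent-encoded;
# the scheme separator and path slashes are left alone.
url = quote(url, safe=':/')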
Hope this keeps someone else from banging their head against the wall.

Python httplib POST request and proper formatting

I'm currently working on an automated way to interface with a database website that has RESTful web services installed. I am having issues figuring out how to properly format and send the requests listed on the following site using Python.
https://neesws.neeshub.org:9443/nees.html
Particular example is this:
POST https://neesws.neeshub.org:9443/REST/Project/731/Experiment/1706/Organization
<Organization id="167"/>
The biggest problem is that I do not know where to put the XML-formatted part of the above. I want to send the above as a Python HTTPS request, and so far I've been trying something with the following structure.
>>>import httplib
>>>conn = httplib.HTTPSConnection("neesws.neeshub.org:9443")
>>>conn.request("POST", "/REST/Project/731/Experiment/1706/Organization")
>>>conn.send('<Organization id="167"/>')
But this appears to be completely wrong. I've never actually used Python for web service interfaces, so my primary question is: how exactly am I supposed to use httplib to send the POST request, particularly the XML-formatted part of it? Any help is appreciated.
You need to set some request headers before sending data - for example, Content-Type to 'text/xml'. Check out a few examples:
Post-XML-Python-1
Which has this code as example:
import sys, httplib

HOST = "www.example.com"
API_URL = "/your/api/url"

def do_request(xml_location):
    """HTTP XML POST request"""
    request = open(xml_location, "r").read()
    webservice = httplib.HTTP(HOST)
    webservice.putrequest("POST", API_URL)
    webservice.putheader("Host", HOST)
    webservice.putheader("User-Agent", "Python post")
    webservice.putheader("Content-type", "text/xml; charset=\"UTF-8\"")
    webservice.putheader("Content-length", "%d" % len(request))
    webservice.endheaders()
    webservice.send(request)
    statuscode, statusmessage, header = webservice.getreply()
    result = webservice.getfile().read()
    print statuscode, statusmessage, header
    print result

do_request("myfile.xml")
Post-XML-Python-2
These may give you some idea.
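For the original HTTPSConnection attempt from the question, here is a minimal sketch that passes the XML body and the headers in a single request() call; any authentication the NEEShub service may require is left out:

import httplib

body = '<Organization id="167"/>'
headers = {
    "Content-Type": 'text/xml; charset="UTF-8"',
    "Content-Length": str(len(body)),
}

conn = httplib.HTTPSConnection("neesws.neeshub.org", 9443)
conn.request("POST", "/REST/Project/731/Experiment/1706/Organization", body, headers)
response = conn.getresponse()
print response.status, response.reason
print response.read()
conn.close()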
