I'm trying to upload a PDF file to a website using Hot Banana's content management system using a Python script. I've successfully logged into the site and can log out, but I can't seem to get file uploads to work.
The file upload is part of a large, complicated web form that submits the form data and PDF file through a POST. Using Firefox along with the Firebug and Tamper Data extensions, I took a peek at what the browser was sending in the POST and where it was going. I believe I mimicked the data the browser was sending in my code, but I'm still having trouble.
I'm importing cookielib to handle cookies, poster to encode the PDF, and urllib and urllib2 to build the request and send it to the URL.
Is it possible that registering the poster openers is clobbering the cookie processor openers? Am I doing this completely wrong?
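Roughly, my setup looks like this (a simplified sketch of what I have; the URL and form field names are placeholders for the real form):

import cookielib
import urllib2
from poster.encode import multipart_encode
from poster.streaminghttp import register_openers

# register_openers() builds and installs poster's streaming opener; adding the
# cookie processor to that same opener should keep both sets of handlers active.
cookies = cookielib.CookieJar()
opener = register_openers()
opener.add_handler(urllib2.HTTPCookieProcessor(cookies))

# placeholder URL and field names
datagen, headers = multipart_encode({'pdfFile': open('document.pdf', 'rb'),
                                     'otherField': 'value'})
request = urllib2.Request('http://example.com/upload.cfm', datagen, headers)
response = urllib2.urlopen(request)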
Edit: What's a good way to debug the process? At the moment, I'm just dumping out the urllib2 response to a text file and examining the output to see if it matches what I get when I do a file upload manually.
Edit 2: Chris Lively suggested I post the error I'm getting. The response from urllib2 doesn't generate an exception, but just returns:
<script>
if (parent != window) {
parent.document.location.reload();
} else {
parent.document.location = 'login.cfm';
}
</script>
I'll keep at it.
A tool like Wireshark will give you a more complete trace at a much lower level than the Firefox plugins.
Often this can be something as simple as not setting the Content-Type correctly, or failing to include the Content-Length.
"What's a good way to debug [a web services] process?"
At the moment, I'm just dumping out the urllib2 response to a text file and examining the output to see if it matches what I get when I do a file upload manually.
Correct. That's about all there is.
HTTP is a very simple protocol -- you make a request (POST, in this case) and the server responds. Not much else involved and not much more you can do while debugging.
What else would you like? Seriously. What kind of debugger are you imagining might exist for this kind of stateless protocol?
You might be better off instrumenting the server to see why this is failing, rather than trying to debug this on the client side.
Related
I've been developing my Django website for about two months and I'm starting to get good overall results with my own functions.
But now I have to start a very hard part (to my mind) and I need some advice and ideas before doing it.
My Django website creates some PDF files from HTML templates with Django variables. Up to now, I've been saving the PDF files directly to my Desktop (in a specific folder), but that's completely insecure.
So, I installed another web application, named LogicalDoc, in order to save the PDF files directly in that application. The PDF files are created and sent to LogicalDoc.
LogicalDoc offers two APIs, SOAP and REST (http://wiki.logicaldoc.com/rest/#/), and I know that Django can communicate with the REST one.
I'm also reading this part of the Django documentation in order to understand how I can proceed: https://docs.djangoproject.com/en/dev/topics/http/file-uploads/
I made a diagram in order to illustrate what I'm describing:
Then, I wrote a script which does a few things:
When the PDF file is created, I create a folder inside LogicalDoc which takes, for example, the following name: lastname_firstname_birthday
Two possibilities: if the folder already exists, I don't create a new one; otherwise, I create it.
Once that's done, I send the PDF file directly into that folder, matching the PDF name against the folder name to do so.
I have some questions about this process:
Firstly, is it possible to do this kind of thing?
Is it hard to do?
What kind of advice could you give me?
Thank you so much!
PS: If you need part of my script, mainly the PDF creation part, I can post it just after my question ;)
The idea is pretty simple, but it always requires some practice.
I strongly advise you to use the REST API and forget about SOAP, as the only thing SOAP will bring you is 'pain' :)
If we check the documentation for document/create, it gives the following information:
The endpoint we have to communicate with:
[protocol]://[server]:[port]/document/create
The HTTP method to use: POST
The list of parameters to provide with your request: body, document, content
Even more, you can test the API by clicking the "Try it out" button and checking the requests in the "Network" tab of your browser (if you open Developer Tools).
I am not sure what kind of metadata you have to provide in the 'document' parameter, but from what I know, you can easily get an idea of what should be done by testing it and putting XML or JSON data into the 'document' parameter.
Content is an array of bytes transferred to the server (which would be your file).
To sum up, a request to the 'document/create' URI will be simple:

import json
import requests

body = json.dumps({'headers': {}, 'object': {}})
document = "<note>data</note>"
content = open('report.xls', 'rb')  # r - reading, b - binary
# field names per the docs; confirm JSON vs. XML for 'body'/'document' against the LogicalDoc docs
r = requests.post('http://logicaldoc/document/create',
                  data={'body': body, 'document': document},
                  files={'content': content})
Please keep in mind that file-transfer requests take time and sometimes you may get a timeout exception. Your code will stop and wait for the response, so it may be a good idea to get some practice with asyncio or celery. Just keep in mind those kinds of possible issues.
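For example, a minimal sketch of guarding the upload with a timeout (the URL is the placeholder used above):

import requests

try:
    # give large uploads a generous, but finite, time to complete
    r = requests.post('http://logicaldoc/document/create',
                      files={'content': open('report.xls', 'rb')},
                      timeout=120)
    r.raise_for_status()
except requests.exceptions.Timeout:
    # retry later, or hand the job off to a background worker (e.g. Celery)
    pass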
I am trying to retrieve query results on sites based on ajax like www.snapbird.org using Python. Since it doesn't show in the page source, I am not sure how to proceed.
I am a Python newbie and hence it would be great if I could get a pointer in the right direction.
I am also open to some other approach to the task if that is easier
This is going to be complex, but as a start, open Firebug and find the URL that gets called when the AJAX request is made. You can call that directly from your Python program and parse the output.
You could use Selenium's Python client driver to parse the page source. I usually use this in conjunction with PyQuery to make web scraping easier.
Here's the basic tutorial for Selenium's Python driver. Be sure to follow the instructions for Selenium version 2 instead of version 1 (unless you're using version 1 for some reason).
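A minimal sketch with the version 2 (WebDriver) API, assuming Firefox is installed:

from selenium import webdriver

# drive a real browser so the AJAX-generated content is rendered
# before you read the page source
driver = webdriver.Firefox()
driver.get('http://www.snapbird.org')
# you may still need to wait/poll until the AJAX calls have finished
html = driver.page_source  # now includes the JavaScript-generated markup
driver.quit()

From there you can hand html to PyQuery (or any other parser) as usual.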
You could also configure Chrome/Firefox to use an HTTP proxy and then log/extract the necessary content with the proxy. I've tinkered with Python proxies to save/log the requests/content based on content-type or URI globs.
For other projects I've written site-specific JavaScript bookmarklets which poll for new data and then POST it to my server (by dynamically creating both a form and an iframe, and setting myform.target=myiframe).
Other javascript scripts/bookmarklets simulate a user interacting with sites, so instead of polling every few seconds the javascript automates clicking buttons and form submissions, etc. These scripts are always very site-specific of course but they've been hugely useful for me, especially when iterating over all the paginated results for a given search.
Here is a stripped down version of walking over a list of "paginated" results and preparing to send the data off to my server (which then further parses it with BeautifulSoup). In particular this was designed for Youtube's Sent/Inbox messages.
var tables = [];
function process_and_repeat(){
    if(!(inbox && inbox.message_pane_ && inbox.message_pane_.innerHTML)){
        alert("We've got no data!");
        return;
    }
    if(inbox.message_pane_.innerHTML.indexOf('<table') === 0)
    {
        tables.push(inbox.message_pane_.innerHTML);
        inbox.next_page();
        setTimeout("process_and_repeat()", 3000);
    }
    else{
        alert("Finished, [" + tables.length + " processed]");
        document.write('<form action=http://curl.sente.cc method=POST><textarea name=sent.html>'+escape(tables.join('\n'))+'</textarea><input type=submit></form>');
    }
}
process_and_repeat(); // now we wait and watch as all the paginated pages are viewed :)
This is a stripped down example without any fancy iframes/non-essentials which just add complexity.
Adding to what Liam said, Selenium is a great tool, too, which has aided in my various scraping needs. I'd be more than happy to help you out with this if you'd like.
One easy solution might be using a browser-like library such as Mechanize. With it you can browse a site, follow links, make searches and do nearly everything you can do with a browser that has a user interface.
But for a very specific job, you may not even need such a library; you can use the urllib and urllib2 Python libraries to make a connection and read the response... You can use Firebug to see the data structure of a search and the response body. Then use urllib to make a request with the relevant parameters...
With an example...
I made a search for joyvalencia and checked the request URL with Firebug to see:
http://api.twitter.com/1/statuses/user_timeline.json?screen_name=joyvalencia&count=100&page=2&include_rts=true&callback=twitterlib1321017083330
So calling this URL with urllib2.urlopen() is the same as making the query on Snapbird. The response body is:
twitterlib1321017083330([{"id_str":"131548107799396357","place":null,"geo":null,"in_reply_to_user_id_str":null,"coordinates":.......
When you use urlopen() and read the response, the string above is what you get... Then you can use Python's json library to read the data and parse it into a Pythonic data structure...
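A rough sketch of that last step (dropping the callback parameter so the response is plain JSON instead of JSONP; the endpoint itself may no longer work, so treat it as illustrative):

import json
import urllib2

url = ('http://api.twitter.com/1/statuses/user_timeline.json'
       '?screen_name=joyvalencia&count=100&page=2&include_rts=true')
raw = urllib2.urlopen(url).read()
tweets = json.loads(raw)          # a list of dicts, one per tweet
print tweets[0]['id_str']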
I'm trying to read in info that is constantly changing from a website.
For example, say I wanted to read in the artist name that is playing on an online radio site.
I can grab the current artist's name but when the song changes, the HTML updates itself and I've already opened the file via:
f = urllib.urlopen("SITE")
So I can't see the updated artist name for the new song.
Can I keep closing and opening the URL in a while(1) loop to get the updated HTML code or is there a better way to do this? Thanks!
You'll have to periodically re-download the website. Don't do it constantly because that will be too hard on the server.
This is because HTTP, by nature, is not a streaming protocol. Once you connect to the server, it expects you to send an HTTP request, and then it sends an HTTP response back containing the page. If your initial request is keep-alive (the default as of HTTP/1.1), you can send the same request again over the same connection and get the page up to date.
What I'd recommend? Depending on your needs, get the page every n seconds, get the data you need. If the site provides an API, you can possibly capitalize on that. Also, if it's your own site, you might be able to implement comet-style Ajax over HTTP and get a true stream.
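A bare-bones sketch of that polling approach (the URL and interval are placeholders):

import time
import urllib

while True:
    # each urlopen() call is a fresh HTTP request, so you always get the current page
    html = urllib.urlopen("http://example.com/radio").read()
    # ... parse the artist name out of `html` here ...
    time.sleep(30)   # be gentle on the server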
Also note that if it's someone else's page, the site may use Ajax via JavaScript to keep it up to date; this means there are other requests causing the updates, and you may need to dissect the website to figure out what requests you need to make to get the data.
If you use urllib2 you can read the headers when you make the request. If you send a conditional request (for example with an If-Modified-Since header) and the server responds with "304 Not Modified", the content hasn't changed.
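For instance, a sketch of such a conditional request with urllib2 (the URL is a placeholder, and the server has to actually send Last-Modified for this to help):

import urllib2

url = "http://example.com/radio"
first = urllib2.urlopen(url)
last_modified = first.info().getheader('Last-Modified')
html = first.read()

# later: ask the server to resend the page only if it changed since then
request = urllib2.Request(url)
if last_modified:
    request.add_header('If-Modified-Since', last_modified)
try:
    html = urllib2.urlopen(request).read()
except urllib2.HTTPError as e:
    if e.code != 304:
        raise
    # 304 Not Modified: nothing changed, keep the html we already have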
Yes, this is the correct approach. To see changes on the web, you have to send a new query each time. Live AJAX sites do exactly the same thing internally.
Some sites provide an additional API, including long polling. Look for documentation on the site or ask their developers whether there is one.
I wrote a web crawler in Python 2.6 using the Bing API that searches for certain documents and then downloads them for classification later. I've been using string methods and urllib.urlretrieve() to download results whose URL ends in .pdf, .ps etc., but I run into trouble when the document is 'hidden' behind a URL like:
http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En
So, two questions. Is there a way in general to tell if a URL has a pdf/doc etc. file that it's linking to if it's not doing so explicitly (e.g. www.domain.com/file.pdf)? Is there a way to get Python to snag that file?
Edit:
Thanks for replies, several of which suggest downloading the file to see if it's of the correct type. Only problem is... I don't know how to do that (see question #2, above). urlretrieve(<above url>) gives only an html file with an href containing that same url.
There's no way to tell from the URL what it's going to give you. Even if it ends in .pdf it could still give you HTML or anything it likes.
You could do a HEAD request and look at the content-type, which, if the server isn't lying to you, will tell you if it's a PDF.
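A small sketch of such a HEAD request with httplib (the URL is a placeholder):

import httplib
import urlparse

url = "http://www.example.com/some/document"
parts = urlparse.urlsplit(url)
conn = httplib.HTTPConnection(parts.netloc)
conn.request("HEAD", parts.path or "/")
resp = conn.getresponse()
print resp.getheader("Content-Type")   # 'application/pdf' if the server is honest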
Alternatively you can download it and then work out whether what you got is a PDF.
In this case, what you refer to as "a document that's not explicitly referenced in a URL" seems to be what is known as a "redirect". Basically, the server tells you that you have to get the document at another URL. Normally, python's urllib will automatically follow these redirects, so that you end up with the right file. (and - as others have already mentioned - you can check the response's mime-type header to see if it's a pdf).
However, the server in question is doing something strange here. You request the url, and it redirects you to another url. You request the other url, and it redirects you again... to the same url! And again... And again... At some point, urllib decides that this is enough already, and will stop following the redirect, to avoid getting caught in an endless loop.
So how come you are able to get the pdf when you use your browser? Because apparently, the server will only serve the pdf if you have cookies enabled. (why? you have to ask the people responsible for the server...) If you don't have the cookie, it will just keep redirecting you forever.
(check the urllib2 and cookielib modules to get support for cookies, this tutorial might help)
At least, that is what I think is causing the problem. I haven't actually tried doing it with cookies yet. It could also be that the server does not "want" to serve the pdf because it detects you are not using a "normal" browser (in which case you would probably need to fiddle with the User-Agent header), but that would be a strange way of doing it. So my guess is that it is using a "session cookie" somewhere, and if you haven't got one yet, it keeps on trying to redirect.
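A minimal cookie-aware fetch along those lines (untested against this particular server, so treat it as a sketch):

import cookielib
import urllib2

# keep the session cookie between redirects so the server eventually
# serves the pdf instead of bouncing us back to the same url
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
response = opener.open('http://www.oecd.org/officialdocuments/displaydocument/'
                       '?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En')
data = response.read()   # should now be the pdf bytes, if the cookie theory is right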
As has been said, there is no way to tell the content type from the URL. But if you don't mind fetching the headers for every URL, you can do this:
import urllib

obj = urllib.urlopen(URL)
headers = obj.info()
if headers['Content-Type'].find('pdf') != -1:
    # we have a pdf file, download the whole thing
    ...
This way you won't have to download each URL, just its headers. It's still not exactly saving network traffic, but you won't get better than that.
Also, you should compare proper MIME types instead of using my crude find('pdf').
No. It is impossible to tell what kind of resource is referenced by a URL just by looking at it. It is totally up to the server to decide what it gives you when you request a certain URL.
Check the MIME type with the response's info() method (urllib). This might not be 100% accurate; it really depends on what the site returns as a Content-Type header. If it's well behaved, it'll return the proper MIME type.
A PDF should return application/pdf, but that may not be the case.
Otherwise you might just have to download it and try it.
You can't tell from the URL directly. You could try to download only the header of the HTTP response and look for the Content-Type header. However, you have to trust the server on this: it could respond with a wrong Content-Type header that doesn't match the data provided in the body.
To detect the file type in Python 3.x in a web app, given a URL to a file that might have no extension or a fake extension, you can use python-magic. Install it using
pip3 install python-magic
For Mac OS X, you should also install libmagic using
brew install libmagic
Code snippet
import magic
from urllib.request import Request, urlopen

url = "http://...url to the file ..."
request = Request(url)
response = urlopen(request)
# the first couple of KB is plenty for libmagic to identify the type
mime_type = magic.from_buffer(response.read(2048), mime=True)  # e.g. 'application/pdf'
print(mime_type)
I am basically trying to export a configuration file, once a week. While the product in question allows you to log in manually via a web client, enter some information, and get an XML file back when you submit, there's no facility for automating this. I can get away with using Python 2.5 (have used for a while) or 2.6 (unfamiliar) to do this.
I think I need to have some way to authenticate against the product. Although I can view the cookie in Firefox, when I looked through the actual cookie.txt file, it was not present. Didn't show up after clearing my private data and re-authenticating. Odd. Should I be shooting for the Cookie module, or could this be some arcane method of authentication that only looks like cookies? How would I know?
I think I need the httplib module to do the HTTP POST, but I don't know how to do the multipart/form-data encoding. Have I overlooked a handy option, or is this something where I must roll my own?
I assume I can get the XML file back in the HTTPResponse from httplib.
I've fetched web things via Python before, but not with POST, multipart encoding, and authentication in the mix.
urllib2 should cover all of this.
Here's a Basic Authentication example.
Here's a Post with multipart/form-data.
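For the authentication half, a minimal Basic Authentication sketch with urllib2 (the URL and credentials are placeholders; if the product really uses cookie-based login, use an HTTPCookieProcessor instead):

import urllib2

password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, 'http://example.com/', 'user', 'secret')
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))
xml = opener.open('http://example.com/export').read()   # the XML config file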
Try the mechanize module.
You should look at the MultipartPostHandler:
http://odin.himinbi.org/MultipartPostHandler.py
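Usage is roughly along these lines (the URL and field names are placeholders; check the module's docstring for the exact details):

import urllib2
import MultipartPostHandler

# the handler transparently encodes any file objects in the parameter
# dict as multipart/form-data
opener = urllib2.build_opener(MultipartPostHandler.MultipartPostHandler)
params = {'description': 'weekly config export',
          'file': open('config.xml', 'rb')}
response = opener.open('http://example.com/upload', params)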
And if you need to support unicode file names, see a fix at:
http://peerit.blogspot.com/2007/07/multipartposthandler-doesnt-work-for.html