I have a program where I need to open many webpages and download information from them. The information, however, is in the middle of the page, and it takes a long time to get to it. Is there a way to have urllib only retrieve x lines? Or, if nothing else, not load the information afterwards?
I'm using Python 2.7.1 on Mac OS 10.8.2.
The returned object is a file-like object, and you can use .readline() to only read a partial response:
resp = urllib.urlopen(url)
for i in range(10):
    line = resp.readline()
would read only 10 lines, for example. Note that this won't guarantee a faster response.
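If you can recognize where the part you need ends, a variant of this is to stop reading and close the response at that point, so the rest of the page is usually never downloaded. A minimal sketch; the marker string is hypothetical:
import urllib

resp = urllib.urlopen(url)
wanted = []
for line in iter(resp.readline, ''):
    wanted.append(line)
    if '</table>' in line:  # hypothetical marker for the end of your data
        break
resp.close()  # drop the connection so the remainder isn't fetched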
I can't get URL
base_url = "http://status.aws.amazon.com/"
socket.setdefaulttimeout(30)
htmldata = urllib2.urlopen(base_url)
for url in parser.url_list:
    get_rss_th = threading.Thread(target=parser.get_rss, name="get_rss_th", args=(url,))
    get_rss_th.start()
print htmldata
<addinfourl at 140176301032584 whose fp = <socket._fileobject object at 0x7f7d56a09750>>
When I call htmldata.read() instead (as in "Python error when using urllib.open"), I just get a blank screen. This is Python 2.7.
Whole code: https://github.com/tech-sketch/zabbix_aws_template/blob/master/scripts/AWS_Service_Health_Dashboard.py
The problem is that I can't get any output from the URL (an RSS feed): the variable from data = zbx_client.recv(4096) is empty, so there's no status.
There's no real problem with your code (aside from some indentation and syntax errors in the paste that apparently aren't in your real code), only with your attempts to debug it.
First, you did this:
print htmldata
That's perfectly fine, but since htmldata is a urllib2 response object, printing it just prints that response object. Which apparently looks like this:
<addinfourl at 140176301032584 whose fp = <socket._fileobject object at 0x7f7d56a09750>>
That doesn't look like particularly useful information, but that's the kind of output you get when you print something that's only really useful for debugging purposes. It tells you what type of object it is, some unique identifier for it, and the key members (in this case, the socket fileobject wrapped up by the response).
Then you apparently tried this:
print htmldata.read()
But already called read on the same object earlier:
parser.feed(htmldata.read())
When you read() the same file-like object twice, the first time gets everything in the file, and the second time gets everything after everything in the file, that is, nothing.
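You can see this with a fresh response object:
first = htmldata.read()   # the whole body
second = htmldata.read()  # '' (empty): the stream is already exhausted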
What you want to do is read() the contents once, into a string, and then you can reuse that string as many times as you want:
contents = htmldata.read()
parser.feed(contents)
print contents
It's also worth noting that, as the urllib2 documentation said right at the top:
See also: The Requests package is recommended for a higher-level HTTP client interface.
Using urllib2 can be a big pain, in a lot of ways, and this is just one of the more minor ones. Occasionally you can't use requests because you have to dig deep under the covers of HTTP, or handle some protocol it doesn't understand, or you just can't install third-party libraries, so urllib2 (or urllib.request, as it's renamed in Python 3.x) is still there. But when you don't have to use it, it's better not to. Even Python itself, in the ensurepip bootstrapper, uses requests instead of urllib2.
With requests, the normal way to access the contents of a response is with the content (for binary) or text (for Unicode text) properties. You don't have to worry about when to read(); it does it automatically for you, and lets you access the text over and over. So, you can just do this:
import requests
base_url = "http://status.aws.amazon.com/"
response = requests.get(base_url, timeout=30)
parser.feed(response.content) # assuming it wants bytes, not unicode
print response.text
If I use this code:
import urllib2
import socket
base_url = "http://status.aws.amazon.com/"
socket.setdefaulttimeout(30)
htmldata = urllib2.urlopen(base_url)
print(htmldata.read())
I get the page's HTML code.
I'm writing a Python script to parse Jenkins job results. I'm using urllib2 to fetch consoleText, but the file that I receive isn't full. The code to fetch the file is:
data = urllib2.urlopen('http://<server>/job/<jobname>/<buildid>/consoleText')
lines = data.readlines()
And the number of lines I get is 2306, while the actual number of lines in the console log is 37521. I can check that by fetching the file via wget:
$ wget 'http://<server>/job/<jobname>/<buildid>/consoleText'
$ wc -l consoleText
37521
Why does urlopen not give me the full result?
UPDATE:
Using requests (as suggested by @svrist) instead of urllib2 doesn't have this problem, so I'm switching to it. My new code is:
data = requests.get('http://<server>/job/<jobname>/<buildid>/consoleText')
lines = [l for l in data.iter_lines()]
But I still have no idea why urllib2.urlopen doesn't work properly.
The Jenkins build log is returned using a chunked encoding response.
Transfer-Encoding: chunked
Based on a couple of other questions, it seems that urllib2 does not handle the entire response and, as you've observed, only returns the first chunk.
I also recommend using the requests package.
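For example (a minimal sketch; substitute your own server, job, and build IDs):
import requests

resp = requests.get('http://<server>/job/<jobname>/<buildid>/consoleText')
print resp.headers.get('Transfer-Encoding')  # 'chunked' for these logs
print len(resp.text.splitlines())            # the full line count this time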
I would like to download a zip file from internet and extract it.
I would rather use requests. I don't want to write to the disk.
I knew how to do that in Python 2, but I am clueless for Python 3.3. Apparently, zipfile.ZipFile wants a file-like object, but I don't know how to get that from what requests returns.
If you know how to do it with urllib.request, I would be curious to see how you do it too.
I found out how to do it:
import requests, zipfile
from io import BytesIO

request = requests.get(url)
file = zipfile.ZipFile(BytesIO(request.content))
What I was missing:
request.content should be used to access the bytes
io.BytesIO is the correct file-like object for bytes.
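Putting the two pieces together (a sketch, assuming url points at a zip file):
import io
import zipfile

import requests

response = requests.get(url)
with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
    print(archive.namelist())                   # inspect without touching the disk
    data = archive.read(archive.namelist()[0])  # one member's contents, as bytes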
Here's another approach that saves you from having to install requests:
import zipfile
import urllib.request
from io import BytesIO

r = urllib.request.urlopen(url)
with zipfile.ZipFile(BytesIO(r.read())) as z:
    print(z.namelist())
Using Requests, this can be done very simply.
import requests, zipfile, StringIO
response = requests.get(zip_file_url)
zipDocument = zipfile.ZipFile(StringIO.StringIO(response.content))
Using StringIO.StringIO, you can make a file-like object from the response's content attribute.
If you want to extract the files to a directory, you can use ZipFile's extractall() method:
zipDocument.extractall()
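extractall() also accepts a target directory, if you don't want to extract into the current working directory (the path here is just an example):
zipDocument.extractall('/tmp/extracted')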
OK, this is driving me nuts.
I am trying to read from the Crunchbase API using Python's urllib2 library. Relevant code:
import urllib2

api_url = "http://api.crunchbase.com/v/1/financial-organization/venrock.js"
len(urllib2.urlopen(api_url).read())
The result is either 73493 or 69397. The actual length of the document is much longer. When I try this on a different computer, the length is either 44821 or 40725. I've tried changing the user agent, using urllib, increasing the timeout to a very large number, and reading small chunks at a time. Always the same result.
I assumed it was a server problem, but my browser reads the whole thing.
Python 2.7.2, OS X 10.6.8 for the ~40k lengths. Python 2.7.1 running as IPython for the ~70k lengths, OS X 10.7.3. Thoughts?
There is something kooky with that server. It might work if you, like your browser, request the file with gzip encoding. Here is some code that should do the trick:
import urllib2, gzip
api_url='http://api.crunchbase.com/v/1/financial-organization/venrock.js'
req = urllib2.Request(api_url)
req.add_header('Accept-encoding', 'gzip')
resp = urllib2.urlopen(req)
data = resp.read()
>>> print len(data)
26610
The problem then is to decompress the data.
from StringIO import StringIO
if resp.info().get('Content-Encoding') == 'gzip':
    g = gzip.GzipFile(fileobj=StringIO(data))
    data = g.read()
>>> print len(data)
183159
I'm not sure if this is a valid answer, since it's a different module entirely, but using the requests module I get a ~183k response:
import requests
url = r'http://api.crunchbase.com/v/1/financial-organization/venrock.js'
r = requests.get(url)
>>> print len(r.text)
183159
So if it's not too late into the project, check it out here: http://docs.python-requests.org/en/latest/index.html
edit: Using the code you provided, I also get a len of ~36k
Did a quick search and found this: urllib2 not retrieving entire HTTP response
I'm currently trying to initiate a file upload with urllib2 and the urllib2_file library. Here's my code:
import sys
import urllib2_file
import urllib2
URL = 'http://aquate.us/upload.php'
d = [('uploaded', open(sys.argv[1]))]
req = urllib2.Request(URL, d)
u = urllib2.urlopen(req)
print u.read()
I've placed this .py file in my My Documents directory and put a shortcut to it in my Send To folder. When I right-click a file, choose Send To, and select Aquate (my Python script), it opens a command prompt for a split second and then closes it. Nothing gets uploaded.
I knew there was probably an error going on, so I typed the code into the command-line Python interpreter, line by line.
When I ran the u=urllib2.urlopen(req) line, I didn't get an error;
instead, the cursor simply started blinking on a new line beneath that line. I waited a couple of minutes to see if something would happen but it just stayed like that. To get it to stop, I had to press ctrl+break.
What's up with this script?
Thanks in advance!
[Edit]
Forgot to mention: when I ran the script without the request data (the file), it ran like a charm. Is it a problem with urllib2_file?
[edit 2]:
import MultipartPostHandler, urllib2, cookielib,sys
import win32clipboard as w
cookies = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies),MultipartPostHandler.MultipartPostHandler)
params = {"uploaded" : open("c:/cfoot.js") }
a=opener.open("http://www.aquate.us/upload.php", params)
text = a.read()
w.OpenClipboard()
w.EmptyClipboard()
w.SetClipboardText(text)
w.CloseClipboard()
That code works like a charm if you run it through the command line.
If you're using Python 2.5 or newer, urllib2_file is both unnecessary and unsupported, so check which version you're using (and perhaps upgrade).
If you're using Python 2.3 or 2.4 (the only versions supported by urllib2_file), try running the sample code and see if you have the same problem. If so, there is likely something wrong either with your Python or urllib2_file installation.
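A quick way to check which interpreter you're running:
import sys
print sys.version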
EDIT:
Also, you don't seem to be using either of urllib2_file's two supported formats for POST data. Try using one of the following two lines instead:
d = ['uploaded', open(sys.argv[1])]
## --OR-- ##
d = {'uploaded': open(sys.argv[1])}
First, there's a third way to run Python programs.
From cmd.exe, type python myprogram.py. You get a nice log. You don't have to type stuff one line at a time.
Second, check the urllib2 documentation. You'll need to look at urllib, also.
A Request requires a URL and a url-encoded buffer of data.
data should be a buffer in the standard application/x-www-form-urlencoded format. The urllib.urlencode() function takes a mapping or sequence of 2-tuples and returns a string in this format.
You need to encode your data.
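A minimal sketch of that suggestion (note this sends the file's contents as an ordinary url-encoded form field, not as a multipart file upload, so the server side would have to expect that):
import sys
import urllib
import urllib2

URL = 'http://aquate.us/upload.php'
# url-encode the POST body, as the documentation quoted above describes
data = urllib.urlencode({'uploaded': open(sys.argv[1]).read()})
req = urllib2.Request(URL, data)
print urllib2.urlopen(req).read()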
If you're still on Python 2.5, what worked for me was to download the code here:
http://peerit.blogspot.com/2007/07/multipartposthandler-doesnt-work-for.html
and save it as MultipartPostHandler.py
then use:
import urllib2, MultipartPostHandler
opener = urllib2.build_opener(MultipartPostHandler.MultipartPostHandler())
opener.open(url, {"file":open(...)})
or if you need cookies:
import urllib2, MultipartPostHandler, cookielib
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj), MultipartPostHandler.MultipartPostHandler())
opener.open(url, {"file":open(...)})