Is anyone here experienced with Requests and HTTP streaming with chunked transfer encoding?
I'm wondering if Requests inherently knows the chunk size provided by the server and uses it as the chunk size in requests.iter_lines(). I'm finding that if I reduce the default chunk size, it processes faster, but is there any correlation with what the server sends back, such that I shouldn't be monkeying around with setting it? Note: I'm consuming social data feeds from DataSift in real time and ultimately writing them to standard out.
The code is:
#!/usr/bin/env python
import requests

headers = {'Auth': 'username:api_key'}
r = requests.get('http://stream.datasift.com/988098098sd09fsd89fsd0f7',
                 headers=headers, stream=True)
for line in r.iter_lines(chunk_size=128):
    if line:
        print line
Looking at the Requests source code (models.py, lines 531 and 31), the preconfigured value of 512 is simply a "sane default". It is not derived from the chunk sizes the server uses in its chunked transfer encoding; iter_lines() just reads the decoded stream in blocks of that many bytes, so tuning it to whatever performs best for your feed is reasonable.
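If you want to verify that yourself, here is a rough timing sketch (the streaming URL is hypothetical, and it only reads the first 1000 lines per run):

import time
import requests

# Compare a few read sizes; chunk_size is purely a local read size,
# not something negotiated with the server.
for size in (128, 512, 2048):
    r = requests.get('http://stream.example.com/feed', stream=True)
    start = time.time()
    for i, line in enumerate(r.iter_lines(chunk_size=size)):
        if i >= 1000:
            break
    r.close()
    print(size, time.time() - start)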
I am trying to use requests to pull information from the NPI API, but it takes over 20 seconds on average. If I access the same URL via my web browser, it takes less than a second. I'm rather new to this, and any help would be greatly appreciated. Here is my code:
import json
import sys
import requests

url = "https://npiregistry.cms.hhs.gov/api/?number=&enumeration_type=&taxonomy_description=&first_name=&last_name=&organization_name=&address_purpose=&city=&state=&postal_code=10017&country_code=&limit=&skip="
htmlfile = requests.get(url)
data = htmlfile.json()
for i in data["results"]:
    print(i)
This might be due to the response being incorrectly formatted, or due to requests taking longer than necessary to set up the request. To solve these issues, read on:
Server response formatted incorrectly
A possible issue is that parsing the response is actually the offending part. You can check this by not reading the response you receive from the server: if the code is still slow, this is not your problem, but if that fixes it, the problem likely lies in parsing the response.
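One rough way to check is to time the two phases separately (a sketch, reusing the url from the question):

import time
import requests

start = time.time()
r = requests.get(url, stream=True)  # stream=True defers reading the body
print("headers received after", time.time() - start, "seconds")

start = time.time()
data = r.json()  # now read and parse the body
print("body read and parsed after", time.time() - start, "seconds")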
If some headers are set incorrectly, this can lead to parsing errors that prevent chunked transfer (source).
In other cases, setting the encoding manually might resolve parsing problems (source).
To fix those, try:
r = requests.get(url)
r.raw.chunked = True   # Fix issue 1: mark the raw response as chunked
r.encoding = 'utf-8'   # Fix issue 2: set the encoding manually
print(r.text)
Setting up the request takes too long
This is mainly applicable if you're sending multiple requests in a row. To prevent requests having to set up the connection each time, you can utilize a requests.Session. This makes sure the connection to the server stays open and configured and also persists cookies as a nice benefit. Try this (source):
import requests

session = requests.Session()
for _ in range(10):
    session.get(url)
Didn't solve your issue?
If that did not solve your issue, I have collected some other possible solutions here.
The webApp I'm currently developing requires large JSON files to be requested by the client, built on the server using Python, and sent back to the client. The solution is implemented via CGI, and is working correctly in every way.
At this stage I'm just employing various techniques to minimize the size of the resulting JSON objects sent back to the client, which are around 5-10 MB. (Without going into detail, this is more or less fixed and cannot be lazy-loaded in any way.)
The host we're using doesn't support mod_deflate or mod_gzip, so while we can't configure Apache to automatically create gzipped content on the server with .htaccess, I figure we'll still be able to receive it and decode it on the client side as long as the Content-Encoding header is set correctly.
What I was wondering is: what is the best way to achieve this? Gzipping something in Python is trivial, and I already know how to do that, but the problem is:
How do I compress the data in such a way, that printing it to the output stream to send via CGI will be both compressed, and readable to the client?
The files have to be created on the fly, based upon input data, so storing premade and prezipped files is not an option, and they have to be received via xhr in the webApp.
My initial experiments with compressing the JSON string with gzip and io.StringIO, then printing it to the output stream, caused it to be printed in Python's normal byte format, e.g. b'\n\x91\x8c\xbc\xd4\xc6\xd2\x19\x98\x14x\x0f1q!\xdc|C\xae\xe0 and such, which bloated the request to twice its normal size...
I was wondering if someone could point me in the right direction here with how I could accomplish this, if it is indeed possible.
I hope I've articulated my problem correctly.
Thank you.
I guess you are using print() (which first converts its argument to a string before sending it to stdout) or sys.stdout (which only accepts str objects).
To write directly on stdout, you can use sys.stdout.buffer, a file-like object that supports bytes objects:
import sys
import gzip

s = 'foo' * 100
sys.stdout.buffer.write(gzip.compress(s.encode()))
Which gives valid gzip data:
$ python3 foo.py | gunzip
foofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoo
Thanks for the answers Valentin and Phillip!
I managed to solve the issue, both of you contributed to the final answer. Turns out it was a combination of things.
Here's the final code that works:
import sys
import gzip
import json

# loadData is the object assembled earlier in the handler
response = json.JSONEncoder().encode(loadData)
sys.stdout.write('Content-type: application/octet-stream\n')
sys.stdout.write('Content-Encoding: gzip\n\n')
sys.stdout.flush()  # make sure the headers go out before the raw bytes
sys.stdout.buffer.write(gzip.compress(response.encode()))
After switching over to sys.stdout.write instead of print for the headers, and flushing the stream, it managed to read correctly. In hindsight that makes sense: the text layer of sys.stdout buffers separately from the underlying sys.stdout.buffer, so without the flush the compressed bytes can reach the client before the headers do. Always something more to learn.
Thanks again!
I want to write code to transfer a file from one site to another. This can be a large file, and I'd like to do it without creating a local temporary file.
I saw the trick of using mmap to upload a large file in Python: "HTTP Post a large file with streaming", but what I really need is a way to link up the response from the GET to creating the POST.
Anyone done this before?
You can't, or at least shouldn't.
urllib2 request objects have no way to stream data into them on the fly, period. And in the other direction, response objects are file-like objects, so in theory you can read(8192) out of them instead of read(), but for most protocols—including HTTP—it will either often or always read the whole response into memory and serve your read(8192) calls out of its buffer, making it pointless. So, you have to intercept the request, steal the socket out of it, and deal with it manually, at which point urllib2 is getting in your way more than it's helping.
urllib2 makes some things easy, some things much harder than they should be, and some things next to impossible; when it isn't making things easy, stop using it.
One solution is to use a higher-level third-party library. For example, requests gets you half-way there (it makes it very easy to stream from a response, but can only stream into a request in limited situations), and requests-toolbelt gets you the rest of the way there (it adds various ways to stream-upload).
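For illustration, a minimal requests-only sketch of the download-into-upload case (the URLs are hypothetical, and it assumes the receiving server accepts chunked uploads):

import requests

# Passing an iterator as `data` makes requests send the POST body with
# chunked transfer encoding, one downloaded block at a time.
getresp = requests.get('http://www.example.com/spam', stream=True)
getresp.raise_for_status()
postresp = requests.post('http://www.example.com/eggs',
                         data=getresp.iter_content(chunk_size=8192))
print(postresp.status_code)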
The other solution is to use a lower-level library. And here, you don't even have to leave the stdlib. httplib forces you to think in terms of sending and receiving things bit by bit, but that's exactly what you want. On the get request, you can just call connect and request, and then call read(8192) repeatedly on the response object. On the post request, you call connect, putrequest, putheader, endheaders, then repeatedly send each buffer from the get request, then getresponse when you're done.
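A sketch of that bit-by-bit approach might look like this (Python 2, hypothetical host and paths; it assumes the GET response carries a Content-Length header):

import httplib

getconn = httplib.HTTPConnection('www.example.com')
getconn.request('GET', '/spam')
getresp = getconn.getresponse()

postconn = httplib.HTTPConnection('www.example.com')
postconn.connect()
postconn.putrequest('POST', '/eggs')
postconn.putheader('Content-Length', getresp.getheader('Content-Length'))
postconn.endheaders()
while True:
    buf = getresp.read(8192)
    if not buf:
        break
    postconn.send(buf)  # forward each downloaded block as it arrives
postresp = postconn.getresponse()
postresp.read()  # exhaust (or at least check) the POST response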
In fact, in Python 3.2+'s http.client (the equivalent of 2.x's httplib), the body you pass to HTTPConnection.request doesn't have to be a string; it can be any iterable, or any file-like object with read and fileno methods… which includes a response object. So, it's this simple:
import http.client

getconn = http.client.HTTPConnection('www.example.com')
getconn.request('GET', '/spam')
getresp = getconn.getresponse()

postconn = http.client.HTTPConnection('www.example.com')
postconn.request('POST', '/eggs', body=getresp)
postresp = postconn.getresponse()
… except, of course, that you probably want to craft appropriate headers (you can actually use urllib.request, the 3.x version of urllib2, to build a Request object and not send it…), and pull the host and port out of the URL with urlparse instead of hardcoding them, and you want to exhaust or at least check the response from the POST request, and so on. But this shows the hard part, and it's not hard.
Unfortunately, I don't think this works in 2.x.
Finally, if you're familiar with libcurl, there are at least three wrappers for it (including one that comes with the source distribution). I'm not sure whether to call libcurl higher-level or lower-level than urllib2, it's sort of on its own weird axis of complexity. :)
urllib2 may be too simple for this task. You might want to look into pycurl. I know it supports streaming.
I have some Ring routes which I'm running in one of two ways:
lein ring server, with the lein-ring plugin
using org.httpkit.server, like (hs/run-server app {:port 3000})
It's a web app (being consumed by an Angular.js browser client).
I have some API tests written in Python using the Requests library:
my_r = requests.post(MY_ROUTE,
                     data=MY_DATA,
                     headers={"Content-Type": "application/json"},
                     timeout=10)
When I use lein ring server, this request works fine in the JS client and the Python tests.
When I use httpkit, this works fine in the JS client but the Python client times out with
socket.timeout: timed out
I can't figure out why the Python client is timing out. It happens with httpkit but not with lein-ring, so I can only assume that the cause is related to the difference.
I've looked at the traffic in WireShark and both look like they give the correct response. Both have the same Content-Length field (15 bytes).
I've raised the number of threads to 10 (I shouldn't need to), with no change.
Any ideas what's wrong?
I found how to fix this, but no satisfactory explanation.
I was using the wrap-json-response Ring middleware to take a HashMap and convert it to JSON. I switched to doing my own conversion in my handler with json/write-str, and this fixed it.
At a guess it might be something to do with how the server handles output buffering, but that's speculation.
I've combed through the Wireshark dumps and I can't see any relevant differences between the two. The sent Content-Length fields are identical. The 'bytes in flight' differ, at 518 and 524.
I have no clue as to why the web browser was happy with this but Python Requests wasn't, or whether this is a bug in Requests, httpkit, ring-middleware-format, or my own code.
I'm working on a project that involves streaming .OGG (or .mp3) files from my web server. I'd prefer not to have to download the whole file and then play it; is there a way to do that in pure Python (no GStreamer, hoping to make it truly cross-platform)? Is there a way to use urllib to download the file a chunk at a time and feed that into, say, PyGame to do the actual audio playing?
Thanks!
I suppose your server supports Range requests. You ask the server via the Range header, giving the start and end byte of the range you want:
import urllib2

# url, startByte, and endByte are defined elsewhere
req = urllib2.Request(url)
req.headers['Range'] = 'bytes=%s-%s' % (startByte, endByte)
f = urllib2.urlopen(req)
f.read()
You can implement a file-like object that always downloads just the needed chunk of the file from the server. Almost every library accepts a file-like object as input.
It will probably be slow because of network latency. You would need to download bigger chunks of the file, preload the file in a separate thread, etc. In other words, you would need to implement the streaming-client logic yourself.
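For illustration, here's a minimal sketch of such a file-like object (a hypothetical RemoteFile class; no error handling, caching, or read-ahead, and it assumes the server honors Range requests):

import urllib2

class RemoteFile(object):
    def __init__(self, url):
        self.url = url
        self.offset = 0

    def read(self, size=8192):
        # Fetch just the next `size` bytes with a Range request.
        req = urllib2.Request(self.url)
        req.add_header('Range', 'bytes=%d-%d' % (self.offset, self.offset + size - 1))
        data = urllib2.urlopen(req).read()
        self.offset += len(data)
        return data

    def seek(self, offset, whence=0):
        self.offset = offset

    def tell(self):
        return self.offset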