Storing gzipped data in redis - python

I have a huge Python dictionary that I want to save to the Redis cache and then have an API handler return this dictionary straight from the cache.
I'm using gzip to compress the stringified dict before storing it in the cache:
import gzip
import json
from io import BytesIO

transformed_object = {...}  # big dictionary

byte_object = BytesIO()
data = json.dumps(transformed_object)
with gzip.GzipFile(fileobj=byte_object, mode="w") as f:
    f.write(data.encode())
final_data = byte_object.getvalue()
I write this to the Redis cache:
context.redis.set(COMPLETE_GZIPPED_CACHE, final_data)
I have an API handler where I want to return the gzipped data:
cache_list = redis.get(COMPLETE_GZIPPED_CACHE)
self.finish(
    {
        "status": True,
        "cache_list": cache_list,
        "updated_at": datetime.datetime.now(),
    }
)
The problem is that I'm getting the below error:
TypeError: Object of type 'bytes' is not JSON serializable
Do I need to decode the bytes back to a string before returning them to the frontend? Ideally I would like the frontend to handle the decoding.
Is there a better way to do this?

Figured it out from other posts. I wrote a function like this and opted to use zlib:
import base64
import json
import zlib

def convert_to_gzip_format(data):
    stringified_object = json.dumps(data).encode("utf-8")
    compressed_file = zlib.compress(stringified_object)
    base64_string = base64.b64encode(compressed_file).decode("ascii")
    return base64_string
This saves it to Redis as an ASCII string. I then use pako.js in the frontend to decode the above into a readable string.
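For completeness, the decode path can be sanity-checked in Python before wiring up the frontend. Here is a minimal sketch that reverses the function above (the function name is mine, not from the original post):

import base64
import json
import zlib

def convert_from_gzip_format(base64_string):
    # Reverse the steps: base64 string -> zlib-compressed bytes -> JSON -> dict
    compressed_file = base64.b64decode(base64_string)
    stringified_object = zlib.decompress(compressed_file)
    return json.loads(stringified_object.decode("utf-8"))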

Related

Storing response as json in Python requests-cache

I'm using requests-cache to cache http responses in human-readable format.
I've patched requests using the filesystem backend, and set the serializer to json, like so:
import requests_cache
requests_cache.install_cache('example_cache', backend='filesystem', serializer='json')
The responses do get cached as json, but the response's body is encoded (I guess using the cattrs library, as described here).
Is there a way to make requests-cache save responses as-is?
What you want to do makes sense, but it's a bit more complicated than it appears. The response files you see are representations of requests.Response objects. Response._content contains the original bytes received from the server. The wrapper methods and properties like Response.json() and Response.text will then attempt to decode that content. For a Response object to work correctly, it needs to have the original binary response body.
When requests-cache serializes that response as JSON, the binary content is encoded in Base85. That's why you're seeing encoded bytes instead of JSON there. To have everything including the response body saved in JSON, there are a couple options:
Option 1
Make a custom serializer. If you wanted to be able to modify response content and have those changes reflected in responses returned by requests-cache, this would probably be the best way to do it.
This may become a bit convoluted, because you would have to:
Handle response content that isn't valid JSON, and save as encoded bytes instead
During deserialization, if the content was saved as JSON, convert it back into bytes to recreate the original Response object
It's doable, though. I could try to come up with an example later, if needed.
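To make those two steps concrete, here is a library-agnostic sketch of the encode/decode logic such a serializer would need (this is not the actual requests-cache serializer API, just the core idea; Base85 is used for the fallback as mentioned above):

import base64
import json

def encode_content(content: bytes):
    # Store the body as parsed JSON when possible; otherwise fall back
    # to an encoded-bytes representation (UnicodeDecodeError is a
    # subclass of ValueError, so one except clause covers both cases)
    try:
        return {'json': json.loads(content)}
    except ValueError:
        return {'b85': base64.b85encode(content).decode('ascii')}

def decode_content(stored) -> bytes:
    # Recreate the original binary body for the Response object
    if 'json' in stored:
        return json.dumps(stored['json']).encode('utf-8')
    return base64.b85decode(stored['b85'])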
Option 2
Make a custom backend. It could extend FileCache and FileDict, and copy valid JSON content to a separate file. Here is a working example:
import json
from os.path import splitext

from requests import Response
from requests_cache import CachedSession, FileCache, FileDict

class JSONFileCache(FileCache):
    """Filesystem backend that copies JSON-formatted response content into a separate file
    alongside the main response file
    """

    def __init__(self, cache_name, **kwargs):
        super().__init__(cache_name, **kwargs)
        self.responses = JSONFileDict(cache_name, **kwargs)

class JSONFileDict(FileDict):
    def __setitem__(self, key: str, value: Response):
        super().__setitem__(key, value)
        response_path = splitext(self._path(key))[0]
        json_path = f'{response_path}_content.json'

        # Will handle errors and skip writing if content can't be decoded as JSON
        with self._try_io(ignore_errors=True):
            content = json.dumps(value.json(), indent=2)
            with open(json_path, mode='w') as f:
                f.write(content)
Usage example:
custom_backend = JSONFileCache('example_cache', serializer='json')
session = CachedSession(backend=custom_backend)
session.get('https://httpbin.org/get')
After making a request, you will see a pair of files like:
example_cache/680f2a52944ee079.json
example_cache/680f2a52944ee079_content.json
That may not be exactly what you want, but it's the easiest option if you only need to read the response content and don't need to modify it.

S3 InvalidDigest when calling the PutObject operation [duplicate]

I have tried to upload an XML file to S3 using boto3. As recommended by Amazon, I would like to send a Base64-encoded MD5-128 bit digest (Content-MD5) of the data.
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Object.put
My Code:
with open(file, 'rb') as tempfile:
    body = tempfile.read()
    tempfile.close()
hash_object = hashlib.md5(body)
base64_md5 = base64.encodebytes(hash_object.digest())
response = s3.Object(self.bucket, self.key + file).put(
    Body=body.decode(self.encoding),
    ACL='private',
    Metadata=metadata,
    ContentType=self.content_type,
    ContentEncoding=self.encoding,
    ContentMD5=str(base64_md5)
)
When I try this, str(base64_md5) creates a string like 'b'ZpL06Osuws3qFQJ8ktdBOw==\n''.
In this case, I get this Error Message:
An error occurred (InvalidDigest) when calling the PutObject operation: The Content-MD5 you specified was invalid.
For test purposes I copied only the value without the b prefix: 'ZpL06Osuws3qFQJ8ktdBOw==\n'
Then I get this error message:
botocore.exceptions.HTTPClientError: An HTTP Client raised and unhandled exception: Invalid header value b'hvUe19qHj7rMbwOWVPEv6Q==\n'
Can anyone help me figure out how to correctly upload a file to S3?
Thanks,
Oliver
Starting with @Isaac Fife's example, stripping it down to identify what's required vs. not, and including imports and such to make it a fully reproducible example:
(the only change you need to make is to use your own bucket name)
import base64
import hashlib

import boto3

contents = "hello world!"

md = hashlib.md5(contents.encode('utf-8')).digest()
contents_md5 = base64.b64encode(md).decode('utf-8')

boto3.client('s3').put_object(
    Bucket="mybucket",
    Key="test",
    Body=contents,
    ContentMD5=contents_md5
)
Learnings: first, the MD5 you are trying to generate will NOT look like what an 'upload' returns. The upload returns an md.hexdigest() version, but we actually need the base64 version of the binary digest. Hex is base16, which is not base64.
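To make the difference visible, a quick sketch (the input string is arbitrary):

import base64
import hashlib

digest = hashlib.md5(b"hello world!").digest()
print(digest.hex())                       # base16 (hex), what hexdigest() gives you
print(base64.b64encode(digest).decode())  # base64, what ContentMD5 expects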
(Python 3.7)
Took me hours to figure this out because the only error you get is "The Content-MD5 you specified was invalid." Super useful for debugging... Anyway, here is the code I used to actually get the file to upload correctly before refactoring.
# json_converter and md5 here are the author's own helper modules
# (md5.get_content_md5 is shown below)
import boto3
from botocore.client import Config

json_results = json_converter.convert_to_json(result)
json_results_utf8 = json_results.encode('utf-8')
content_md5 = md5.get_content_md5(json_results_utf8)
content_md5_string = content_md5.decode('utf-8')
metadata = {
    "md5chksum": content_md5_string
}
s3 = boto3.resource('s3', config=Config(signature_version='s3v4'))
obj = s3.Object(bucket, 'filename.json')
obj.put(
    Body=json_results_utf8,
    ContentMD5=content_md5_string,
    ServerSideEncryption='aws:kms',
    Metadata=metadata,
    SSEKMSKeyId=key_id)
and the hashing:
def get_content_md5(data):
    digest = hashlib.md5(data).digest()
    return base64.b64encode(digest)
The hard part for me was figuring out what encoding you need at each step in the process, not being very familiar with how strings are stored in Python at the time.
get_content_md5 takes a utf-8 bytes-like object only, and returns the same. But to pass the MD5 hash to AWS, it needs to be a string. You have to decode it before you give it to ContentMD5.
Pro-tip: Body, on the other hand, needs to be given bytes or a seekable object. Make sure that if you pass a seekable object, you seek(0) to the beginning of the file before you pass it to AWS, or the MD5 will not match. For that reason, using bytes is less error-prone, imo.
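A short sketch of that pattern (the bucket and file names are placeholders): compute the digest from the file object, then rewind before handing it to boto3, so the bytes AWS reads match the digest.

import base64
import hashlib

import boto3

with open("report.json", "rb") as fp:
    digest = hashlib.md5(fp.read()).digest()
    fp.seek(0)  # rewind, or the uploaded body won't match the MD5
    boto3.client("s3").put_object(
        Bucket="mybucket",
        Key="report.json",
        Body=fp,
        ContentMD5=base64.b64encode(digest).decode("utf-8"),
    )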

How to speed up returning a 20MB Json file from a Python-Flask application?

I am trying to call an API which in turn triggers a stored procedure in our SQL Server database. This is how I coded it:
class Api_Name(Resource):
    def __init__(self):
        pass

    @classmethod
    def get(self):
        try:
            engine = database_engine
            connection = engine.connect()
            sql = "DECLARE @return_value int EXEC @return_value = [dbname].[dbo].[proc_name]"
            return call_proc(sql, connection)
        except Exception as e:
            return {'message': 'Proc execution failed with error => {error}'.format(error=e)}, 400
call_proc is the method where I return the JSON from the database:
def call_proc(sql: str, connection):
    try:
        json_data = []
        rv = connection.execute(sql)
        for result in rv:
            json_data.append(dict(zip(result.keys(), result)))
        return Response(json.dumps(json_data), status=200)
    except Exception as e:
        return {'message': '{error}'.format(error=e)}, 400
    finally:
        connection.close()
The problem with the output is the way the JSON is returned, and its size.
At first the API took 1 minute 30 seconds, when the return statement was like this:
case1: return Response(json.dumps(json_data), status=200, mimetype='application/json')
After looking online, I found that the above statement tries to prettify the JSON. So I removed mimetype from the response and made it:
case2: return Response(json.dumps(json_data), status=200)
The API now runs for 30 seconds; the JSON output is not aligned properly, but it's still JSON.
I see the output size of the JSON returned from the API is close to 20MB. I observed this in the Postman response:
Status: 200 OK Time: 29s Size: 19MB
The difference in JSON output:
case1:
[
    {
        "col1": "val1",
        "col2": "val2"
    },
    {
        "col1": "val1",
        "col2": "val2"
    }
]
case2:
[{"col1":"val1","col2":"val2"},{"col1":"val1","col2":"val2"}]
Is the output from the two aforementioned cases actually different? If so, how can I fix the problem?
If there is no difference, is there any way to speed this up and reduce the run time further, like compressing the JSON I am returning?
You can use gzip compression to shrink your plain text from megabytes down to kilobytes, or use the flask-compress library for that.
I'd also suggest using ujson to make the dumps() call faster.
import gzip

from flask import Flask, make_response
import ujson as json

app = Flask(__name__)

@app.route('/data.json')
def compress():
    compression_level = 5  # of 9 max
    data = [
        {"col1": "val1", "col2": "val2"},
        {"col1": "val1", "col2": "val2"}
    ]
    content = gzip.compress(json.dumps(data).encode('utf8'), compression_level)
    response = make_response(content)
    response.headers['Content-Length'] = len(content)
    response.headers['Content-Encoding'] = 'gzip'
    return response
Documentation:
https://docs.python.org/3/library/gzip.html
https://github.com/colour-science/flask-compress
https://pypi.org/project/ujson/
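If you prefer the library route, flask-compress only needs to be attached to the app once, after which eligible responses are compressed automatically. A minimal sketch (assumes pip install flask-compress):

from flask import Flask
from flask_compress import Compress

app = Flask(__name__)
Compress(app)  # responses are now gzip-compressed when the client accepts it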
First of all, profile: if 90% of the time is being spent transferring across the network, then optimising processing speed is less useful than optimising transfer speed (for example, by compressing the response as wowkin recommended, though the web server may be configured to do this automatically if you are using one).
Assuming that constructing the JSON is slow, if you control the database code you could use its JSON capabilities to serialise the data and avoid doing it at the Python layer. For example,
SELECT col1, col2
FROM tbl
WHERE col3 > 42
FOR JSON AUTO
would give you
[
    {
        "col1": "foo",
        "col2": 1
    },
    {
        "col1": "bar",
        "col2": 2
    },
    ...
]
Nested structures can be created too, as described in the docs.
If the requester only needs the data, return it as a download using Flask's send_file feature and avoid the cost of constructing an HTML response:
from io import BytesIO
from flask import send_file

def call_proc(sql: str, connection):
    try:
        rv = connection.execute(sql)
        json_data = rv.fetchone()[0]
        # BytesIO expects encoded data; if you can get the server to encode
        # the data instead it may be faster.
        encoded_json = json_data.encode('utf-8')
        buf = BytesIO(encoded_json)
        return send_file(buf, mimetype='application/json', as_attachment=True, conditional=True)
    except Exception as e:
        return {'message': '{error}'.format(error=e)}, 400
    finally:
        connection.close()
You need to implement pagination on your API. 19MB is absurdly large and will lead to some very annoyed users.
gzip and cleverness with the JSON responses will sadly not be enough; you'll need to put in a bit more legwork.
Luckily, there are many pagination questions and answers out there, and Flask's modular approach means someone has probably written a module that's applicable to your problem. I'd start off by re-implementing the method with an ORM; I hear SQLAlchemy is quite good. A rough sketch of the idea follows.
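As an illustration only (hypothetical route and parameter names, T-SQL paging syntax, and a connection object like the one in the question; none of this is from the question's code):

from flask import jsonify, request

@app.route('/data')
def paged_data():
    page = max(int(request.args.get('page', 1)), 1)
    per_page = min(int(request.args.get('per_page', 100)), 1000)
    offset = (page - 1) * per_page
    # OFFSET/FETCH requires an ORDER BY clause in T-SQL; offset and
    # per_page are computed ints, so the interpolation is safe here
    rv = connection.execute(
        "SELECT col1, col2 FROM tbl ORDER BY col1 "
        "OFFSET %d ROWS FETCH NEXT %d ROWS ONLY" % (offset, per_page)
    )
    return jsonify([dict(zip(row.keys(), row)) for row in rv])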
To answer your question:
1 - Both JSONs are semantically identical.
You can make use of http://www.jsondiff.com to compare two JSONs.
2 - I would recommend you make chunks of your data and send them across the network.
This might help:
https://masnun.com/2016/09/18/python-using-the-requests-module-to-download-large-files-efficiently.html
TL;DR: Try restructuring your JSON payload (i.e., change the schema).
I see that you are constructing the JSON response in one of your APIs. Currently, your JSON payload looks something like:
[
    {
        "col0": "val00",
        "col1": "val01"
    },
    {
        "col0": "val10",
        "col1": "val11"
    }
    ...
]
I suggest you restructure it in such a way that each (first level) key in your JSON represents the entire column. So, for the above case, it will become something like:
{
    "col0": ["val00", "val10", "val20", ...],
    "col1": ["val01", "val11", "val21", ...]
}
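The conversion itself is straightforward; a small sketch (assuming every row has the same keys):

def rows_to_columns(rows):
    # [{"col0": "val00", ...}, ...] -> {"col0": ["val00", ...], ...}
    if not rows:
        return {}
    return {key: [row[key] for row in rows] for key in rows[0]}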
Here are the results from some offline tests I performed.
Experiment variables:
NUMBER_OF_COLUMNS = 10
NUMBER_OF_ROWS = 100000
LENGTH_OF_STR_DATA = 5
#!/usr/bin/env python3
import json

NUMBER_OF_COLUMNS = 10
NUMBER_OF_ROWS = 100000
LENGTH_OF_STR_DATA = 5

def get_column_name(id_):
    return 'col%d' % id_

def random_data():
    import string
    import random
    return ''.join(random.choices(string.ascii_letters, k=LENGTH_OF_STR_DATA))

def get_row():
    return {
        get_column_name(i): random_data()
        for i in range(NUMBER_OF_COLUMNS)
    }

# data1 has same schema as your JSON
data1 = [
    get_row() for _ in range(NUMBER_OF_ROWS)
]

with open("/var/tmp/1.json", "w") as f:
    json.dump(data1, f)

def get_column():
    return [random_data() for _ in range(NUMBER_OF_ROWS)]

# data2 has the new proposed schema, to help you reduce the size
data2 = {
    get_column_name(i): get_column()
    for i in range(NUMBER_OF_COLUMNS)
}

with open("/var/tmp/2.json", "w") as f:
    json.dump(data2, f)
Comparing sizes of the two JSONs:
$ du -h /var/tmp/1.json
17M
$ du -h /var/tmp/2.json
8.6M
In this case, the size is almost halved.
I would suggest you do the following:
First and foremost, profile your code to see the real culprit. If it is really the payload size, proceed further.
Try to change your JSON's schema (as suggested above)
Compress your payload before sending it (either from your Flask WSGI app layer or at your webserver level, if you are running your Flask app behind a production-grade webserver like Apache or Nginx)
For large data that you can't paginate, using something like ndjson (or any type of delimited record format) can really reduce the server resources needed, since you avoid holding the whole JSON object in memory. You would need access to the response stream to write each object/line to the response, though.
The response
[
    {
        "col1": "val1",
        "col2": "val2"
    },
    {
        "col1": "val1",
        "col2": "val2"
    }
]
Would end up looking like
{"col1":"val1","col2":"val2"}
{"col1":"val1","col2":"val2"}
This also has advantages on the client, since you can parse and process each line on its own.
If you aren't dealing with nested data structures, responding with a CSV is going to be even smaller.
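In Flask, streaming ndjson comes down to returning a generator; a minimal sketch (assumes rows is an iterable of dicts, e.g. the json_data list built earlier):

import json
from flask import Response

def stream_ndjson(rows):
    def generate():
        for row in rows:
            yield json.dumps(row) + '\n'
    return Response(generate(), mimetype='application/x-ndjson')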
I want to note that there is a standard way to write a sequence of separate records in JSON, and it's described in RFC 7464. For each record:
Write the record separator byte (0x1E).
Write the JSON record, which is a regular JSON document that can also contain inner line breaks, in UTF-8.
Write the line feed byte (0x0A).
(Note that the JSON text sequence format, as it's called, uses a more liberal syntax for parsing text sequences of this kind; see the RFC for details.)
In your example, the JSON text sequence would look as follows, where \x1E and \x0A are the record separator and line feed bytes, respectively:
\x1E{"col1":"val1","col2":"val2"}\x0A\x1E{"col1":"val1","col2":"val2"}\x0A
Since the JSON text sequence format allows inner line breaks, you can write each JSON record as you naturally would, as in the following example:
\x1E{
"col1":"val1",
"col2":"val2"}
\x0A\x1E{
"col1":"val1",
"col2":"val2"
}\x0A
Notice that the media type for JSON text sequences is not application/json, but application/json-seq; see the RFC.
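Producing such a sequence is simple; a small sketch of a writer following the three steps above:

import json

def write_json_seq(records, fp):
    # fp must be opened in binary mode
    for record in records:
        fp.write(b'\x1e')                             # record separator
        fp.write(json.dumps(record).encode('utf-8'))  # the JSON document
        fp.write(b'\x0a')                             # line feed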

Reading JSON file from S3 with AWS Lambda

First of all, I'm new to AWS, so I apologize if the question is very simple or not explained properly.
I'm trying to read a JSON file stored in an S3 bucket with an AWS Lambda function.
My main problem is that I am completely unable to extract the information from it.
This is my code:
import json
import boto3

def lambda_handler(event, context):
    BUCKET = 'BUCKET'
    KEY = 'KEY.json'
    client = boto3.client('s3')
    result = client.get_object(Bucket=BUCKET, Key=KEY)

    # Read the object
    text = result['Body'].read()  # .decode('utf-8')

    # convert to string
    text_str = str(text)
    text_str = text_str.replace('\r\n', '')
    print(text_str)
If I use decode('utf-8'), I get: "errorMessage": "'utf-8' codec can't decode byte 0xba in position 976: invalid start byte".
If I don't, I get the JSON file, but like this:
'{\r\n "id": 0,\r\n "uid": "uid",\r\n "name": "User",\r\n "last": "Candidate"}'
I am stuck here, because the .replace doesn't work, and I don't know how to take what I get and access it as standard JSON.
Thank you in advance.
Update: it looks like the main problem is that I have non-ASCII characters like 'á' in the JSON file. Now I get something like this (I just show a part of the JSON):
'{"id": 0,"uid": "uid","name": "User","last": "Candidate"}'
I've tried ast.literal_eval to get rid of the quotes and access the dictionary, and also json.dumps to try to avoid the problem with the non-ASCII characters, but nothing has worked.
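(For reference, a hedged sketch of one way to handle this: skip str() and .replace() entirely, decode with a fallback encoding to absorb the stray non-UTF-8 byte, and hand the string to json.loads. The latin-1 fallback here is an assumption about the file's actual encoding.)

import json
import boto3

def lambda_handler(event, context):
    result = boto3.client('s3').get_object(Bucket='BUCKET', Key='KEY.json')
    raw = result['Body'].read()
    try:
        text = raw.decode('utf-8')
    except UnicodeDecodeError:
        text = raw.decode('latin-1')  # assumption: a latin-1-compatible source
    data = json.loads(text)  # a regular Python dict; \r\n needs no handling
    return data['name']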

upload binary data or file from memory using python http post

I have seen the recipes for uploading files via multipart/form-data and pycurl. Both methods seem to require a file on disk. Can these recipes be modified to supply binary data from memory instead of from disk? I guess I could just use an XML-RPC server instead. I wanted to get around having to encode and decode the binary data and send it raw... Do pycurl and multipart/form-data work with raw data?
This (small) library will take a file descriptor, and will do the HTTP POST operation: https://github.com/seisen/urllib2_file/
You can pass it a StringIO object (containing your in-memory data) as the file descriptor.
Find something that can work with a file handle. Then simply pass a StringIO object instead of a real file descriptor.
I met a similar issue today. After trying both pycurl and multipart/form-data, I decided to read the Python httplib/urllib2 source code to find out, and I came up with one comparably good solution:
set the Content-Length header (of the file) before doing the post
pass an opened file when doing the post
Here is the code:
import urllib2, os
image_path = "png\\01.png"
url = 'http://xx.oo.com/webserviceapi/postfile/'
length = os.path.getsize(image_path)
png_data = open(image_path, "rb")
request = urllib2.Request(url, data=png_data)
request.add_header('Cache-Control', 'no-cache')
request.add_header('Content-Length', '%d' % length)
request.add_header('Content-Type', 'image/png')
res = urllib2.urlopen(request).read().strip()
return res
see my blog post: http://www.2maomao.com/blog/python-http-post-a-binary-file-using-urllib2/
The following Python code works reliably on 2.6.x. The input data is of type str.
Note that the server that receives the data has to loop to read all of it, as large payloads will be chunked. A Java code snippet for reading the chunked data is attached as well.
import io
import StringIO

import pycurl

def post(self, url, data):
    self.curl = pycurl.Curl()
    self.response_headers = StringIO.StringIO()
    self.response_body = io.BytesIO()
    self.curl.setopt(pycurl.WRITEFUNCTION, self.response_body.write)
    self.curl.setopt(pycurl.HEADERFUNCTION, self.response_headers.write)
    self.curl.setopt(pycurl.FOLLOWLOCATION, 1)
    self.curl.setopt(pycurl.MAXREDIRS, 5)
    self.curl.setopt(pycurl.TIMEOUT, 60)
    self.curl.setopt(pycurl.ENCODING, "deflate, gzip")
    self.curl.setopt(pycurl.URL, url)
    self.curl.setopt(pycurl.VERBOSE, 1)
    self.curl.setopt(pycurl.POST, 1)
    self.curl.setopt(pycurl.POSTFIELDS, data)
    self.curl.setopt(pycurl.HTTPHEADER, ["Content-Type: application/octet-stream"])
    self.curl.setopt(pycurl.POSTFIELDSIZE, len(data))
    self.curl.perform()
    return (url, self.curl.getinfo(pycurl.RESPONSE_CODE),
            self.response_headers.getvalue(), self.response_body.getvalue())
Java code for the servlet engine:
int postSize = Integer.parseInt(req.getHeader("Content-Length"));
byte[] results = new byte[postSize];
int read = 0;
while (read < postSize) {
    // read into the correct offset so earlier chunks aren't overwritten
    int n = req.getInputStream().read(results, read, postSize - read);
    if (n < 0) break;
    read += n;
}
Found a solution that works with the CherryPy file upload example: urllib2-binary-upload.py
import io # Part of core Python
import requests # Install via: 'pip install requests'
# Get the data in bytes. I got it via:
# with open("smile.gif", "rb") as fp: data = fp.read()
data = b"GIF89a\x12\x00\x12\x00\x80\x01\x00\x00\x00\x00\xff\x00\x00!\xf9\x04\x01\n\x00\x01\x00,\x00\x00\x00\x00\x12\x00\x12\x00\x00\x024\x84\x8f\x10\xcba\x8b\xd8ko6\xa8\xa0\xb3Wo\xde9X\x18*\x15x\x99\xd9'\xad\x1b\xe5r(9\xd2\x9d\xe9\xdd\xde\xea\xe6<,\xa3\xd8r\xb5[\xaf\x05\x1b~\x8e\x81\x02\x00;"
# Hookbin is similar to requestbin - just a neat little service
# which helps you to understand which queries are sent
url = 'https://hookb.in/je2rEl733Yt9dlMMdodB'
in_memory_file = io.BytesIO(data)
response = requests.post(url, files=(("smile.gif", in_memory_file),))
