How to convert binary data to JSON - Python

I want to convert the below data to JSON in Python.
I have the data in the following format:
b'{"id": "1", "name": " value1"}\n{"id":"2", name": "value2"}\n{"id":"3", "name": "value3"}\n'
This has multiple JSON objects separated by \n. I was trying to load this as JSON: I converted the data into a string first and then called json.loads, but I am getting the exception below.
my_json = content.decode('utf8')
json_data = json.loads(my_json)
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 2306)

You need to decode it, then split by '\n' and load each JSON object separately. If you store your byte string in a variable called byte_string, you could do something like:
json_str = byte_string.decode('utf-8')
json_objs = json_str.split('\n')
for obj in json_objs:
    json.loads(obj)
For the particular string that you have posted here though, you will get an error on the second object because the second key in it is missing a double quote. It is name" in the string you linked.
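A minimal sketch of a slightly more robust variant of the loop above, assuming the missing quote on the second object has been fixed: splitlines() avoids the empty trailing element that split('\n') leaves behind (which json.loads would reject), blank lines are skipped, and the parsed dicts are collected into a list.
import json

byte_string = b'{"id": "1", "name": " value1"}\n{"id": "2", "name": "value2"}\n{"id": "3", "name": "value3"}\n'

parsed = []
for line in byte_string.decode('utf-8').splitlines():
    line = line.strip()
    if not line:  # skip any blank lines
        continue
    parsed.append(json.loads(line))

print(parsed)
# [{'id': '1', 'name': ' value1'}, {'id': '2', 'name': 'value2'}, {'id': '3', 'name': 'value3'}]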

First, this isn't valid json since it's not a single object. Second, there is a typo: the "id":"2" entry is missing a double-quote on the name property element.
As an alternative to processing one dict at a time, you can replace the newlines with "," and turn the whole thing into an array. This is a fragile solution, since it requires exactly one newline between each dict, but it is compact:
s = b'{"id": "1", "name": " value1"}\n{"id":"2", "name": "value2"}\n{"id":"3", "name": "value3"}\n'
my_json = s.decode('utf8')
json_data = json.loads("[" + my_json.rstrip().replace("\n", ",") + "]")

You have to first decode your JSON to a string. So you can just say:
your_json_string = the_json.decode()
Now you have a string.
Now what you want to do is:
your_json_string = your_json_string.replace("\\n", "")
so you are replacing the \n with nothing, basically. Note that the two backslashes are required; this is not a typo.
Now you can just say:
your_json = json.loads(your_json_string)

Related

Within JSON string I have text variables with "quotes" giving JSONDecodeError: Expecting ',' delimiter: line 1 column 1712398

Example of data = [{"name":"Jamie Andersen","role":"Head of Laboratory "Synthestech" ","photo":""},{"name":"freddie nelof","role":"some text","photo":""},///]
The actual data comes from an API and contains a lot of data, so it's not manageable manually.
Quotes within a text variable, like "Synthestech" in this example, make the JSON file unreadable and give this error:
JSONDecodeError: Expecting ',' delimiter: line 1 column 1712398.
My code is currently:
with open("C:/xampp/htdocs/code/data.json") as f:
    data_fuld = json.load(f)
    df1 = pd.json_normalize(data_fuld)
    # print(df1)
    df2 = pd.DataFrame(df1)
EDIT:
It is always the same variable causing trouble, so maybe it's possible to delete it before it is read as JSON, since I do not need the variable "role"?
The best solution is to fix the upstream API so that it actually produces valid data.
Alternatively, you can inspect the JSONDecodeError and try to fix the input data by escaping the " characters. This is a very hacky solution and only works on one specific kind of invalid JSON.
import re, json

def parse(data):
    try:
        return json.loads(data)
    except json.JSONDecodeError as err:
        if err.msg != "Expecting ',' delimiter":
            raise
        # insert a `\` before the last `"` in front of the syntax error position
        escaped_data = re.sub(r'^(.*)"', r"\1\"", data[:err.pos]) + data[err.pos:]
        return parse(escaped_data)
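For illustration, a hedged usage sketch of the parse() helper above on a small string shaped like the posted example (the stray inner quotes around Synthestech are what trigger the delimiter error):
broken = '[{"name":"Jamie Andersen","role":"Head of Laboratory "Synthestech" ","photo":""}]'
fixed = parse(broken)
print(fixed[0]["role"])  # Head of Laboratory "Synthestech" (the inner quotes are preserved)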

My String has embedded byte string - Python

I am currently reading from S3 and saving the data within a dataframe.
S3 objects are read in as bytes; however, it seems the byte-string representation also ends up embedded within my string.
I am unable to decode it using example_string.decode().
Another problem is finding emojis within the text. These are saved as UTF-8, and because they end up as a byte string embedded within a string, extra backslashes are added.
I just want the plain string, with no embedded byte string or anything else mixed in.
Any help would be appreciated.
bucket_iter = iter(bucket)
while True:
    next_val = next(bucket_iter)
    current_file = next_val.get()['Body'].read().decode('utf-8')
    split_file = current_file.split(']')
    for tweet in split_file:
        a = tweet.split(',')
        if len(a) == 10:
            a[0] = a[0][2:12]
            new_row = {'date': a[0], 'tweet': a[1], 'user': a[2], 'cashtags': a[3], 'number_cashtags': a[4], 'Hashtags': a[5], 'number_hashtags': a[6], 'quoted_tweet': a[7], 'urs_present': a[8], 'spam': a[9]}
            df = df.append(new_row, ignore_index=True)
Example of a line in the S3 bucket:
["2021-01-06 13:41:48", "Q1 2021 Earnings Estimate for The Walt Disney Company $DIS Issued By Truist Securiti https://t co/l5VSCCCgDF #stocks", "b'AmericanBanking'", "$DIS", "1", "#stocks'", "1", "False", "1", "0"]
Even though the item is a string, it keeps the 'b' prefix in front of it. Just write a small bit of code that keeps only what is inside the quotes.
def bytes_to_string(b):
    return str(b)[2:-1]
EDIT: you could technically use regexes to do this, but this is a much more readable way of doing it (and shorter)
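A quick hedged usage sketch for the helper above, using one of the embedded byte strings from the sample line; note that for a real bytes object, b.decode('utf-8') is usually preferable, since str(b)[2:-1] keeps escape sequences as literal text.
embedded = "b'AmericanBanking'"  # the value as it appears inside the larger string
print(bytes_to_string(embedded))  # AmericanBanking
print(bytes_to_string(b'AmericanBanking'))  # also AmericanBanking, for an actual bytes object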

Parse the json output for specific value [duplicate]

I'll be receiving a JSON encoded string from Objective-C, and I am decoding a dummy string (for now) like the code below. My output comes out with character 'u' prefixing each item:
[{u'i': u'imap.gmail.com', u'p': u'aaaa'}, {u'i': u'333imap.com', u'p': u'bbbb'}...
How is JSON adding this Unicode character? What's the best way to remove it?
import json
import sys

mail_accounts = []
da = {}
try:
    s = '[{"i":"imap.gmail.com","p":"aaaa"},{"i":"imap.aol.com","p":"bbbb"},{"i":"333imap.com","p":"ccccc"},{"i":"444ap.gmail.com","p":"ddddd"},{"i":"555imap.gmail.com","p":"eee"}]'
    jdata = json.loads(s)
    for d in jdata:
        for key, value in d.iteritems():
            if key not in da:
                da[key] = value
            else:
                da = {}
                da[key] = value
        mail_accounts.append(da)
except Exception, err:
    sys.stderr.write('Exception Error: %s' % str(err))
print mail_accounts
The u prefix just means that you have a Unicode string. When you really use the string, it won't appear in your data. Don't be thrown by the printed output.
For example, try this:
print mail_accounts[0]["i"]
You won't see a u.
Everything is cool, man. The 'u' is a good thing: it indicates that the string is of type Unicode in Python 2.x.
http://docs.python.org/2/howto/unicode.html#the-unicode-type
The d3 print below is the one you are looking for (which is the combination of dumps and loads) :)
Having:
import json
d = """{"Aa": 1, "BB": "blabla", "cc": "False"}"""
d1 = json.loads(d) # Produces a dictionary out of the given string
d2 = json.dumps(d) # Produces a string out of a given dict or string
d3 = json.dumps(json.loads(d)) # 'dumps' gets the dict from 'loads' this time
print "d1: " + str(d1)
print "d2: " + d2
print "d3: " + d3
Prints:
d1: {u'Aa': 1, u'cc': u'False', u'BB': u'blabla'}
d2: "{\"Aa\": 1, \"BB\": \"blabla\", \"cc\": \"False\"}"
d3: {"Aa": 1, "cc": "False", "BB": "blabla"}
Those 'u' characters prefixed to the strings signify that the strings are Unicode objects.
If you want to remove those 'u' characters from your object, you can do this:
import json, ast
jdata = ast.literal_eval(json.dumps(jdata)) # Removing uni-code chars
Let's check it out from the Python shell:
>>> import json, ast
>>> jdata = [{u'i': u'imap.gmail.com', u'p': u'aaaa'}, {u'i': u'333imap.com', u'p': u'bbbb'}]
>>> jdata = ast.literal_eval(json.dumps(jdata))
>>> jdata
[{'i': 'imap.gmail.com', 'p': 'aaaa'}, {'i': '333imap.com', 'p': 'bbbb'}]
Unicode is an appropriate type here. The JSONDecoder documentation describes the conversion table and states that JSON string objects are decoded into Unicode objects.
From 18.2.2. Encoders and Decoders:
JSON           Python
==================================
object         dict
array          list
string         unicode
number (int)   int, long
number (real)  float
true           True
false          False
null           None
"encoding determines the encoding used to interpret any str objects decoded by this instance (UTF-8 by default)."
The u prefix means that those strings are unicode rather than 8-bit strings. The best way to not show the u prefix is to switch to Python 3, where strings are unicode by default. If that's not an option, the str constructor will convert from unicode to 8-bit, so simply loop recursively over the result and convert unicode to str. However, it is probably best just to leave the strings as unicode.
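As a rough sketch of what that recursive conversion could look like in Python 2 (not from the answer itself, just one possible helper), assuming the decoded data contains only dicts, lists, and scalars:
def to_str(obj):
    # recursively convert unicode strings to 8-bit str (Python 2)
    if isinstance(obj, unicode):
        return obj.encode('utf-8')
    if isinstance(obj, list):
        return [to_str(item) for item in obj]
    if isinstance(obj, dict):
        return {to_str(k): to_str(v) for k, v in obj.iteritems()}
    return obj

print to_str([{u'i': u'imap.gmail.com', u'p': u'aaaa'}])
# [{'i': 'imap.gmail.com', 'p': 'aaaa'}]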
I kept running into this problem when trying to capture JSON data in the log with the Python logging library, for debugging and troubleshooting purposes. Getting the u character is a real nuisance when you want to copy the text and paste it into your code somewhere.
As everyone will tell you, this is because it is a Unicode representation, and it could come from the fact that you’ve used json.loads() to load in the data from a string in the first place.
If you want the JSON representation in the log, without the u prefix, the trick is to use json.dumps() before logging it out. For example:
import json
import logging
# Prepare the data
json_data = json.loads('{"key": "value"}')
# Log normally and get the Unicode indicator
logging.warning('data: {}'.format(json_data))
>>> WARNING:root:data: {u'key': u'value'}
# Dump to a string before logging and get clean output!
logging.warning('data: {}'.format(json.dumps(json_data)))
>>> WARNING:root:data: {'key': 'value'}
Try this:
mail_accounts[0].encode("ascii")
Just replace the u' with a single quote. Note that mail_accounts is a list, so convert it to its string representation first:
print(str(mail_accounts).replace("u'", "'"))

Convert a bytes array into JSON format

I want to parse a bytes string in JSON format to convert it into Python objects. This is the source I have:
my_bytes_value = b'[{\'Date\': \'2016-05-21T21:35:40Z\', \'CreationDate\': \'2012-05-05\', \'LogoType\': \'png\', \'Ref\': 164611595, \'Classe\': [\'Email addresses\', \'Passwords\'],\'Link\':\'http://some_link.com\'}]'
And this is the desired outcome I want to have:
[{
    "Date": "2016-05-21T21:35:40Z",
    "CreationDate": "2012-05-05",
    "LogoType": "png",
    "Ref": 164611595,
    "Classes": [
        "Email addresses",
        "Passwords"
    ],
    "Link": "http://some_link.com"
}]
First, I converted the bytes to string:
my_new_string_value = my_bytes_value.decode("utf-8")
but when I try to invoke loads to parse it as JSON:
my_json = json.loads(my_new_string_value)
I get this error:
json.decoder.JSONDecodeError: Expecting value: line 1 column 174 (char 173)
Your bytes object is almost JSON, but it's using single quotes instead of double quotes, and it needs to be a string. So one way to fix it is to decode the bytes to str and replace the quotes. Another option is to use ast.literal_eval; see below for details. If you want to print the result or save it to a file as valid JSON you can load the JSON to a Python list and then dump it out. Eg,
import json
my_bytes_value = b'[{\'Date\': \'2016-05-21T21:35:40Z\', \'CreationDate\': \'2012-05-05\', \'LogoType\': \'png\', \'Ref\': 164611595, \'Classe\': [\'Email addresses\', \'Passwords\'],\'Link\':\'http://some_link.com\'}]'
# Decode UTF-8 bytes to Unicode, and convert single quotes
# to double quotes to make it valid JSON
my_json = my_bytes_value.decode('utf8').replace("'", '"')
print(my_json)
print('- ' * 20)
# Load the JSON to a Python list & dump it back out as formatted JSON
data = json.loads(my_json)
s = json.dumps(data, indent=4, sort_keys=True)
print(s)
output
[{"Date": "2016-05-21T21:35:40Z", "CreationDate": "2012-05-05", "LogoType": "png", "Ref": 164611595, "Classe": ["Email addresses", "Passwords"],"Link":"http://some_link.com"}]
- - - - - - - - - - - - - - - - - - - -
[
    {
        "Classe": [
            "Email addresses",
            "Passwords"
        ],
        "CreationDate": "2012-05-05",
        "Date": "2016-05-21T21:35:40Z",
        "Link": "http://some_link.com",
        "LogoType": "png",
        "Ref": 164611595
    }
]
As Antti Haapala mentions in the comments, we can use ast.literal_eval to convert my_bytes_value to a Python list, once we've decoded it to a string.
from ast import literal_eval
import json
my_bytes_value = b'[{\'Date\': \'2016-05-21T21:35:40Z\', \'CreationDate\': \'2012-05-05\', \'LogoType\': \'png\', \'Ref\': 164611595, \'Classe\': [\'Email addresses\', \'Passwords\'],\'Link\':\'http://some_link.com\'}]'
data = literal_eval(my_bytes_value.decode('utf8'))
print(data)
print('- ' * 20)
s = json.dumps(data, indent=4, sort_keys=True)
print(s)
Generally, this problem arises because someone has saved data by printing its Python repr instead of using the json module to create proper JSON data. If it's possible, it's better to fix that problem so that proper JSON data is created in the first place.
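A small illustrative sketch of that point (the file name here is just an example): writing the data with json.dump produces valid JSON, whereas printing the Python repr produces the single-quoted, almost-JSON text seen in the question.
import json

data = [{'Date': '2016-05-21T21:35:40Z', 'Ref': 164611595}]

# printing the repr produces the problematic single-quoted text:
# [{'Date': '2016-05-21T21:35:40Z', 'Ref': 164611595}]
print(str(data))

# json.dump writes real JSON that json.load can read back later
with open('data.json', 'w') as f:
    json.dump(data, f)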
You can simply use:
import json
json.loads(my_bytes_value)
Python 3.5+: use the io module.
import json
import io
my_bytes_value = b'[{\'Date\': \'2016-05-21T21:35:40Z\', \'CreationDate\': \'2012-05-05\', \'LogoType\': \'png\', \'Ref\': 164611595, \'Classe\': [\'Email addresses\', \'Passwords\'],\'Link\':\'http://some_link.com\'}]'
fix_bytes_value = my_bytes_value.replace(b"'", b'"')
my_json = json.load(io.BytesIO(fix_bytes_value))
d = json.dumps(byte_str.decode('utf-8'))
To convert this bytes array directly to JSON, you could first convert it to a string with decode() (UTF-8 is the default), then change the quotation marks. The last step is to strip the surrounding " characters from the dumped string, to turn the JSON from a string back into a list:
dumps(s.decode()).replace("'", '"')[1:-1]
A better solution is:
import json
byte_array_example = b'{"text": "\u0627\u06CC\u0646 \u06CC\u06A9 \u0645\u062A\u0646 \u062A\u0633\u062A\u06CC \u0641\u0627\u0631\u0633\u06CC \u0627\u0633\u062A."}'
res = json.loads(byte_array_example.decode('unicode_escape'))
print(res)
result:
{'text': 'این یک متن تستی فارسی است.'}
Decoding with utf-8 alone cannot decode the unicode escape sequences; the right solution here is unicode_escape. It works fine.
If you have a bytes object and want to store it in a JSON file, then you should first decode the bytes object, because JSON only has a few data types and raw byte data isn't one of them. It has arrays, decimal numbers, strings, and objects.
To decode a byte object you first have to know its encoding. For this, you can use
import chardet
encoding = chardet.detect(your_byte_object)['encoding']
Then you can save this object to your JSON file like this:
import json

data = {"data": your_byte_object.decode(encoding)}
with open('request.txt', 'w') as file:
    json.dump(data, file)
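For completeness, a hedged sketch of reading that file back (assuming the same 'request.txt' written above) and re-encoding the value to bytes; 'encoding' is the value obtained from chardet earlier:
import json

with open('request.txt') as file:
    loaded = json.load(file)

original_bytes = loaded["data"].encode(encoding)  # 'encoding' from the chardet step above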
The simplest solution is to use the json function that comes with the HTTP request. For example:
For example:

How do I decode unicode characters via python?

I am trying to import the following json file using python:
The file is called new_json.json:
{
    "nextForwardToken": "f/3208873243596875673623625618474139659",
    "events": [
        {
            "ingestionTime": 1045619,
            "timestamp": 1909000,
            "message": "2 32823453119 eni-889995t1 54.25.64.23 156.43.12.120 3389 23 6 342 24908 143234809 983246 ACCEPT OK"
        }
    ]
}
I have the following code to read the json file, and remove the unicode characters:
import json

JSON_FILE = "new_json.json"
with open(JSON_FILE) as infile:
    print infile
    print '\n type of infile is \n', infile
    data = json.load(infile)

str_data = str(data)  # convert to string to remove unicode characters
wo_unicode = str_data.decode('unicode_escape').encode('ascii', 'ignore')
print 'unicode characters have been removed \n'
print wo_unicode
But print wo_unicode still prints with the unicode characters (i.e. the u prefix) in it.
The unicode characters cause a problem when trying to treat the json as a dictionary:
for item in data:
    iden = item.get['nextForwardToken']
...results in an error:
AttributeError: 'unicode' object has no attribute 'get'
This has to work in Python2.7. Is there an easy way around this?
The error has nothing to do with Unicode; you are trying to treat the keys as dicts. Just use data to get 'nextForwardToken':
print data.get('nextForwardToken')
When you iterate over data, you are iterating over the keys, so 'nextForwardToken'.get('nextForwardToken'), "events".get('nextForwardToken'), etc. are obviously not going to work even with the correct syntax.
Whether you access by data.get(u'nextForwardToken') or data.get('nextForwardToken'), both will return the value for the key:
In [9]: 'nextForwardToken' == u'nextForwardToken'
Out[9]: True
In [10]: data[u'nextForwardToken']
Out[10]: u'f/3208873243596875673623625618474139659'
In [11]: data['nextForwardToken']
Out[11]: u'f/3208873243596875673623625618474139659'
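If the goal is also to walk the records rather than the top-level keys, a brief sketch (based only on the file shown in the question) would iterate over data['events'] and read fields from each dict:
for event in data['events']:
    print event['timestamp'], event['message']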
This code will give you the values as str, without the unicode prefix:
import json

JSON_FILE = "/tmp/json.json"
with open(JSON_FILE) as infile:
    print infile
    print '\n type of infile is \n', infile
    data = json.load(infile)
    print data
    str_data = json.dumps(data)
    print str_data
