Which serialization format is this? - python

I'm dealing with some serialized data fetched from an SQL Server database that looks like this:
('|AFoo|BBaar|C61|DFoo Baar|E200060|F200523|G200240|', )
Any idea which format this is? And is there a Python package that can deserialize it?

What you show is a tuple that contains one value - a string. You can use str.split to construct a list of the string's component parts, i.e. AFoo, BBaar, etc.:
t = ('|AFoo|BBaar|C61|DFoo Baar|E200060|F200523|G200240|', )
for e in t:
    values = [v for v in e.split('|') if v]
    print(values)
Output:
['AFoo', 'BBaar', 'C61', 'DFoo Baar', 'E200060', 'F200523', 'G200240']
Note:
The for loop is a generic approach that allows for multiple strings in the tuple. For the data fragment shown in the question, it isn't strictly necessary.
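The single-letter prefixes look like per-field tags. If that is what they are (an assumption - the format itself is unidentified), a minimal sketch that maps each tag to its value:
t = ('|AFoo|BBaar|C61|DFoo Baar|E200060|F200523|G200240|', )
# Hypothetical interpretation: the first character of each field is a tag (A-G)
# and the remainder is the value.
fields = {f[0]: f[1:] for f in t[0].split('|') if f}
print(fields)  # e.g. {'A': 'Foo', 'B': 'Baar', 'C': '61', 'D': 'Foo Baar', ...}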

Related

Python 2.7 parsing data

I have data that looks like this:
data = 'somekey:value4thekey&second-key:valu3-can.be?anything&third_k3y:it%can have spaces;too'
In a nice human-readable way it would look like this:
somekey : value4thekey
second-key : valu3-can.be?anything
third_k3y : it%can have spaces;too
How should I parse the data so that data['somekey'] gives me value4thekey?
Note: the & joins all of the different items.
How I'm currently tackling it
Currently, I use this ugly solution:
all = data.split('&')
for i in all:
    if i.startswith('somekey'):
        print i
This solution is very bad due to multiple obvious limitations. It would be much better if I could somehow parse it into a Python tree-like object.
I'd split the string by & to get a list of key-value strings, and then split each such string by : to get key-value pairs. Using dict and list comprehensions actually makes this quite elegant:
result = {k:v for k, v in (part.split(':') for part in data.split('&'))}
You can parse your data directly into a dictionary - split on the item separator & and then split each item again on the : that separates key and value:
table = {
    key: value for key, value in
    (item.split(':') for item in data.split('&'))
}
This allows you direct access to elements, e.g. as table['somekey'].
If you don't have nested objects within a value, you can parse it into a dictionary:
structure = {}
for ele in data.split('&'):
    ele_split = ele.split(':')
    structure[ele_split[0]] = ele_split[1]
You can now use structure to get the values:
print structure["somekey"]
# returns "value4thekey"
Since the items share a common key:value format, you can split on those separators:
for i in x.split("&"):
    print(i.split(":"))
This prints a two-element list for each item, where index 0 is the key and index 1 is the value. Iterate through those pairs and load them into a dictionary, as in the sketch below, and you should be good!
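For example, a minimal sketch of that last step, reusing the x variable from the answer above (assumed to hold the data string):
table = {}
for i in x.split("&"):
    key, value = i.split(":", 1)  # split only on the first colon
    table[key] = value
print(table["somekey"])  # value4thekey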
I'd reformat the data as YAML and then parse the YAML:
import re
import yaml

data = 'somekey:value4thekey&second-key:valu3-can.be?anything&third_k3y:it%can have spaces;too'
# Turn each '&' into a newline and add a space after each ':' so the string is valid YAML
yaml_data = re.sub('[:]', ': ', re.sub('[&]', '\n', data))
y = yaml.safe_load(yaml_data)
for k in y:
    print "%s : %s" % (k, y[k])
Here's the output:
third_k3y : it%can have spaces;too
somekey : value4thekey
second-key : valu3-can.be?anything

Converting a dataframe into JSON (in pyspark) and then selecting desired fields

I'm new to Spark. I have a dataframe that contains the results of some analysis. I converted that dataframe into JSON so I could display it in a Flask App:
results = result.toJSON().collect()
An example entry in my json file is below. I then tried to run a for loop in order to get specific results:
{"userId":"1","systemId":"30","title":"interest"}
for i in results:
    print i["userId"]
This doesn't work at all and I get errors such as: Python (json) : TypeError: expected string or buffer
I used json.dumps and json.loads and still nothing - I keep on getting errors such as string indices must be integers, as well as the above error.
I then tried this:
print i[0]
This gave me the first character in the json "{" instead of the first line. I don't really know what to do, can anyone tell me where I'm going wrong?
Many Thanks.
If the result of result.toJSON().collect() is a JSON encoded string, then you would use json.loads() to convert it to a dict. The issue you're running into is that when you iterate a dict with a for loop, you're given the keys of the dict. In your for loop, you're treating the key as if it's a dict, when in fact it is just a string. Try this:
import json

# toJSON() turns each row of the DataFrame into a JSON string;
# calling first() on the result will fetch the first row.
results = json.loads(result.toJSON().first())
for key in results:
    print results[key]

# To decode the entire DataFrame, iterate over the result of toJSON()
def print_rows(row):
    data = json.loads(row)
    for key in data:
        print "{key}:{value}".format(key=key, value=data[key])

results = result.toJSON()
results.foreach(print_rows)
EDIT: The issue is that collect returns a list, not a dict. I've updated the code. Always read the docs.
collect(): Return a list that contains all of the elements in this RDD.
Note: This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.
EDIT2: I can't emphasize enough, always read the docs.
EDIT3: Look here.
>>> import json
>>> df = sqlContext.read.table("n1")
>>> df.show()
+-----+-------+----+---------------+-------+----+
| c1| c2| c3| c4| c5| c6|
+-----+-------+----+---------------+-------+----+
|00001|Content| 1|Content-article| |2018|
|00002|Content|null|Content-article|Content|2015|
+-----+-------+----+---------------+-------+----+
>>> results = df.toJSON().map(lambda j: json.loads(j)).collect()
>>> for i in results: print i["c1"], i["c6"]
...
00001 2018
00002 2015
Here is what worked for me:
df_json = df.toJSON()
for row in df_json.collect():
    # json string
    print(row)
    # json object
    line = json.loads(row)
    print(line[some_key])
Keep in mind that using .collect() is not advisable, since it pulls the distributed data back to the driver and defeats the purpose of using a distributed DataFrame.
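If only a sample is needed on the driver, a small sketch using take() to bound what is pulled back:
# Pull at most 10 JSON strings to the driver instead of the whole DataFrame
for row in df.toJSON().take(10):
    line = json.loads(row)
    print(line)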
To get an array of python dicts:
results = df.toJSON().map(json.loads).collect()
To get an array of JSON strings:
results = df.toJSON().collect()
To get a JSON string (i.e. a JSON string of an array):
results = df.toPandas().to_json(orient='records')
and using that to get an array of Python dicts:
results = json.loads(df.toPandas().to_json(orient='records'))

extract a dictionary key value from a string

I am currently in the process of using Python to transmit a Python dictionary from one Raspberry Pi to another over a 433MHz link, using virtual wire (vw.py) to send the data.
The issue with vw.py is that the data being sent is in string format.
I am successfully receiving the data on PI_no2, and now I am trying to reformat the data so it can be placed back into a dictionary.
I have created a small snippet to test with, using a temporary string in the same format as it is received from vw.py.
So far I have successfully split the string at the colon, and I am now trying to get rid of the quote marks, without much success.
my_status = {}
# temp is in the format the data is received
temp = "'mycode':['1','2','firstname','Lastname']"
key,value = temp.split(':')
print key
print value
key = key.replace("'",'')
value = value.replace("'",'')
my_status.update({key:value})
print my_status
Gives the result
'mycode'
['1','2','firstname','Lastname']
{'mycode': '[1,2,firstname,Lastname]'}
I require the value to be in the format
['1','2','firstname','Lastname']
but the replace gets rid of all the single quote marks.
You can use ast.literal_eval
import ast
temp = "'mycode':['1','2','firstname','Lastname']"
key,value = map(ast.literal_eval, temp.split(':'))
status = {key: value}
Will output
{'mycode': ['1', '2', 'firstname', 'Lastname']}
This shouldn't be hard to solve. What you need to do is strip away the [ ] in your list string, then split by ,. Once you've done this, iterate over the elements and add them to a list. Your code should look like this:
string = "[1,2,firstname,lastname]"
string = string.strip("[")
string = string.strip("]")
values = string.split(",")
final_list = []
for val in values:
    final_list.append(val)
print final_list
This will return:
> ['1', '2', 'firstname', 'lastname']
Then take this list and insert it into your dictionary:
d = {}
d['mycode'] = final_list
The advantage of this method is that you can handle each value independently. If you need to convert 1 and 2 to int then you'll be able to do that while leaving the other two as str.
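A small sketch of that per-value conversion (assuming the numeric entries are plain digit strings):
final_list = ['1', '2', 'firstname', 'lastname']
converted = [int(v) if v.isdigit() else v for v in final_list]
print converted  # [1, 2, 'firstname', 'lastname']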
As an alternative to cricket_007's suggestion of using a syntax tree parser - your format is very similar to standard YAML. It's a pretty lightweight and intuitive framework, so I'll suggest it:
import yaml

a = "'mycode':['1','2','firstname','Lastname']"
print yaml.safe_load(a.replace(":", ": "))
# prints the dictionary {'mycode': ['1', '2', 'firstname', 'Lastname']}
The only difference between your format and YAML is that the colon needs a trailing space.
It also will distinguish between primitive data types for you, if that's important. Drop the quotes around 1 and 2 and it determines that they're numerical.
Tadhg McDonald-Jensen suggested pickling in the comments. This will allow you to store more complicated objects, though you may lose the human-readable format you've been experimenting with
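A minimal sketch of the pickle idea (assuming the link can carry the serialized payload; note that pickled data is not human-readable and should only be unpickled if you trust the sender):
import pickle

my_status = {'mycode': ['1', '2', 'firstname', 'Lastname']}

# On the sending Pi: serialize the dictionary
payload = pickle.dumps(my_status)

# On the receiving Pi: reconstruct the dictionary
restored = pickle.loads(payload)
print restored  # {'mycode': ['1', '2', 'firstname', 'Lastname']}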

how to create a dictionary from a set of properly formatted tuples in python

Is there a simple way to create a dictionary from a list of formatted tuples? E.g. if I do something like:
d={"responseStatus":"SUCCESS","sessionId":"01234","userId":2000004904}
This creates a dictionary called d. However, if I want to create a dictionary from a string containing the same text, I can't do that:
res=<some command that returns {"responseStatus":"SUCCESS","sessionId":"01234","userId":2000004904}>
print res
# returns {"responseStatus":"SUCCESS","sessionId":"01234","userId":2000004904}
d=dict(res)
This throws an error that says:
ValueError: dictionary update sequence element #0 has length 1; 2 is required
I strongly, strongly suspect that you have JSON on your hands.
import json
d = json.loads('{"responseStatus":"SUCCESS","sessionId":"01234","userId":2000004904}')
would give you what you want.
Use dict() with zip():
>>> u = ("foo", "bar")
>>> v = ("blah", "zoop")
>>> d = dict(zip(u, v))
>>> d
{'foo': 'blah', 'bar': 'zoop'}
Note: if the two tuples have different lengths, zip silently drops the unmatched items, as shown below.
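For example (a quick illustration of that caveat):
>>> dict(zip(("a", "b", "c"), ("1", "2")))
{'a': '1', 'b': '2'}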
Based on what you gave us, res is
# returns {"responseStatus":"SUCCESS","sessionId":"01234","userId":2000004904}
So the plan is to grab the string starting at the curly brace to the end and use json to decode it:
import json
# Discard the text before the curly brace
res = res[res.index('{'):]
# Turn that text into a dictionary
d = json.loads(res)
All you need to do in your particular case is
d = eval(res)
And please keep security in mind when using eval, especially if you're mixing it with ajax/json.
UPDATE
Since others pointed out you might be getting this data over the web and it isn't just a "how to make this work" question, use this:
import json
json.loads(res)

How to compare an element of a tuple (int) to determine if it exists in a list

I have the two following lists:
# List of tuples representing the index of resources and their unique properties
# Format of (ID,Name,Prefix)
resource_types=[('0','Group','0'),('1','User','1'),('2','Filter','2'),('3','Agent','3'),('4','Asset','4'),('5','Rule','5'),('6','KBase','6'),('7','Case','7'),('8','Note','8'),('9','Report','9'),('10','ArchivedReport',':'),('11','Scheduled Task',';'),('12','Profile','<'),('13','User Shared Accessible Group','='),('14','User Accessible Group','>'),('15','Database Table Schema','?'),('16','Unassigned Resources Group','#'),('17','File','A'),('18','Snapshot','B'),('19','Data Monitor','C'),('20','Viewer Configuration','D'),('21','Instrument','E'),('22','Dashboard','F'),('23','Destination','G'),('24','Active List','H'),('25','Virtual Root','I'),('26','Vulnerability','J'),('27','Search Group','K'),('28','Pattern','L'),('29','Zone','M'),('30','Asset Range','N'),('31','Asset Category','O'),('32','Partition','P'),('33','Active Channel','Q'),('34','Stage','R'),('35','Customer','S'),('36','Field','T'),('37','Field Set','U'),('38','Scanned Report','V'),('39','Location','W'),('40','Network','X'),('41','Focused Report','Y'),('42','Escalation Level','Z'),('43','Query','['),('44','Report Template ','\\'),('45','Session List',']'),('46','Trend','^'),('47','Package','_'),('48','RESERVED','`'),('49','PROJECT_TEMPLATE','a'),('50','Attachments','b'),('51','Query Viewer','c'),('52','Use Case','d'),('53','Integration Configuration','e'),('54','Integration Command f'),('55','Integration Target','g'),('56','Actor','h'),('57','Category Model','i'),('58','Permission','j')]
# This is a list of resource ID's that we do not want to reference directly, ever.
unwanted_resource_types=[0,1,3,10,11,12,13,14,15,16,18,20,21,23,25,27,28,32,35,38,41,47,48,49,50,57,58]
I'm attempting to compare the two in order to build a third list containing the 'Name' of each unique resource type that currently exists in unwanted_resource_types. e.g. The final result list should be:
result = ['Group','User','Agent','ArchivedReport','ScheduledTask','...','...']
I've tried the following, which (I thought) should work:
result = []
for res in resource_types:
    if res[0] in unwanted_resource_types:
        result.append(res[1])
and when that failed to populate result I also tried:
result = []
for res in resource_types:
    for type in unwanted_resource_types:
        if res[0] == type:
            result.append(res[1])
also to no avail. Is there something I'm missing? I believe this would be the right place to use a list comprehension, but that's still in my grey basket of understanding fully (the Python docs are a bit too succinct for me in this case).
I'm also open to completely rethinking this problem, but I do need to retain the list of tuples as it's used elsewhere in the script. Thank you for any assistance you may provide.
Your resource types are using strings, and your unwanted resources are using ints, so you'll need to do some conversion to make it work.
Try this:
result = []
for res in resource_types:
    if int(res[0]) in unwanted_resource_types:
        result.append(res[1])
or using a list comprehension:
result = [item[1] for item in resource_types if int(item[0]) in unwanted_resource_types]
The numbers in resource_types are numbers contained within strings, whereas the numbers in unwanted_resource_types are plain numbers, so your comparison is failing. This should work:
result = []
for res in resource_types:
    if int(res[0]) in unwanted_resource_types:
        result.append(res[1])
The problem is that your triples contain strings while your unwanted resources contain numbers. Change the data to
resource_types=[(0,'Group','0'), ...
or use int() to convert the strings to ints before comparison, and it should work. Your result can be computed with a list comprehension as in
result=[rt[1] for rt in resource_types if int(rt[0]) in unwanted_resource_types]
If you change ('0', ...) into (0, ...), you can leave out the int() call.
Additionally, you may change the unwanted_resource_types variable into a set, like
unwanted_resource_types=set([0,1,3, ... ])
to improve speed (if speed is an issue, else it's unimportant).
The one-liner:
result = map(lambda x: dict(map(lambda a: (int(a[0]), a[1]), resource_types))[x], unwanted_resource_types)
without any explicit loop does the job.
Ok - you don't want to use this in production code - but it's fun. ;-)
Comment:
The inner dict(map(lambda a: (int(a[0]), a[1]), resource_types)) creates a dictionary from the input data:
{0: 'Group', 1: 'User', 2: 'Filter', 3: 'Agent', ...
The outer map chooses the names from the dictionary.
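For readability, the same result can be produced in two explicit steps (a sketch equivalent to the one-liner above):
# Build the ID -> Name lookup once
names = {int(rt[0]): rt[1] for rt in resource_types}
# Then pick out the names of the unwanted resource types
result = [names[rid] for rid in unwanted_resource_types]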
