How to model a complex JSON file as a Python class

Are there any Python helper libraries I can use to create models for generating complex JSON files such as the one below? I've read about colander, but I'm not sure it does what I need. The tricky bit is that the trigger-rule section may contain nested match rules, as described at https://github.com/adnanh/webhook/wiki/Hook-Rules
[
  {
    "id": "webhook",
    "execute-command": "/home/adnan/redeploy-go-webhook.sh",
    "command-working-directory": "/home/adnan/go",
    "pass-arguments-to-command":
    [
      {
        "source": "payload",
        "name": "head_commit.id"
      },
      {
        "source": "payload",
        "name": "pusher.name"
      },
      {
        "source": "payload",
        "name": "pusher.email"
      }
    ],
    "trigger-rule":
    {
      "and":
      [
        {
          "match":
          {
            "type": "payload-hash-sha1",
            "secret": "mysecret",
            "parameter":
            {
              "source": "header",
              "name": "X-Hub-Signature"
            }
          }
        },
        {
          "match":
          {
            "type": "value",
            "value": "refs/heads/master",
            "parameter":
            {
              "source": "payload",
              "name": "ref"
            }
          }
        }
      ]
    }
  }
]

Define a class like this:
class AttributeDictionary(dict):
    __getattr__ = dict.__getitem__
    __setattr__ = dict.__setitem__
When you load your JSON, pass AttributeDictionary as the object_hook:
import json
data = json.loads(json_str, object_hook=AttributeDictionary)
Then you can access dict entries by specifying the key as an attribute:
print(data[0].id)
Output
webhook
Note: attribute access only works for keys that are valid Python identifiers, so you will want to replace dashes in keys with underscores; keys that keep their dashes can still be read with the usual data[0]['execute-command'] syntax.
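For instance, here is a minimal sketch of an object hook that does that replacement while loading; the helper name attr_hook is just for illustration, and json_str is assumed to hold the JSON shown above.

import json

class AttributeDictionary(dict):
    __getattr__ = dict.__getitem__
    __setattr__ = dict.__setitem__

def attr_hook(obj):
    # Called for every JSON object; swap dashes for underscores so keys like
    # "execute-command" become valid attribute names.
    return AttributeDictionary({k.replace('-', '_'): v for k, v in obj.items()})

# json_str is assumed to hold the JSON document shown above.
data = json.loads(json_str, object_hook=attr_hook)
print(data[0].execute_command)                               # /home/adnan/redeploy-go-webhook.sh
print(data[0].trigger_rule['and'][0].match.parameter.name)   # X-Hub-Signature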

Related

Python: construct multipart/form-data request as defined in swagger.json

I am trying to construct a Python request based on a swagger.json schema. It mentions multipart/form-data, and I did some research; the remaining issue is the "array" type, which I am not sure how to handle. Below is the swagger.json schema.
"requestBody": {
"required": true,
"content": {
"multipart/form-data": {
"schema": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"file": {
"items": {
"type": "string",
"format": "binary"
},
"type": "array"
}
},
"required": [
"id",
"name",
"file"
]
}
}
}
}
I found that the files parameter of the Python requests module handles multipart forms (How to send a "multipart/form-data" with requests in python?), but I don't know how to handle the 'file' part, which is an array here. If it were a single object rather than an array, I would go with 'file': ('testfile', open('testfile', 'rb')).
The UI side has not been deployed yet, so I cannot test against it. Could anyone help here? Thanks
data = {
    'id': test_id,
    'name': test_name,
    'file': []
}
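No answer was posted here, but as a hedged sketch, assuming the server accepts the same field repeated once per array element: the requests library lets you pass files as a list of (field_name, file_tuple) pairs, which is the usual way to send several binary parts under one name. The URL and file names below are placeholders.

import requests

# Sketch only: the URL is a placeholder and test_id/test_name stand in for the
# values the question builds elsewhere.
url = "https://example.com/upload"
test_id = "123"
test_name = "example"

data = {
    'id': test_id,
    'name': test_name,
}

# Repeat the 'file' field once per element of the array.
files = [
    ('file', ('first.bin', open('first.bin', 'rb'), 'application/octet-stream')),
    ('file', ('second.bin', open('second.bin', 'rb'), 'application/octet-stream')),
]

response = requests.post(url, data=data, files=files)
print(response.status_code)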

How to save polygon data on Elasticsearch through Django GeoShapeField inside NestedField?

The model looks like this:
class Restaurant(models.Model):
    zones = JSONField(default=dict)
The document looks like this:
@registry.register_document
class RestaurantDocument(Document):
    zone = fields.NestedField(properties={"slug": fields.KeywordField(), "polygon_zone": fields.GeoShapeField()})

    class Index:
        name = 'restaurant_data'
        settings = {
            'number_of_shards': 1,
            'number_of_replicas': 0
        }

    class Django:
        model = Restaurant

    def prepare_zone(self, instance):
        return instance.zone
After indexing, the mapping looks like this:
"zone": {
"type": "nested",
"properties": {
"polygon_zone": {
"type": "geo_shape"
},
"slug": {
"type": "keyword"
}
}
}
But when I save data in the zones field with the following structure:
[{"slug":"dhaka","ploygon_zone":{"type":"polygon","coordinates":[[[89.84207153320312,24.02827811169503],[89.78233337402344,23.93040645231774],[89.82833862304688,23.78722976367578],[90.02197265625,23.801051951752406],[90.11329650878905,23.872024546162947],[90.11672973632812,24.00883517846163],[89.84207153320312,24.02827811169503]]]}}]
the Elasticsearch mapping changes automatically to the following:
"zone": {
"type": "nested",
"properties": {
"ploygon_zone": {
"properties": {
"coordinates": {
"type": "float"
},
"type": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
},
"polygon_zone": {
"type": "geo_shape"
},
"slug": {
"type": "keyword"
}
}
}
That's why, when I search on the zone__polygon_zone field, it always returns empty: the indexed data is not polygon-type data.
So how can I save polygon data in Elasticsearch through Django with a nested GeoShapeField?
There is a typo while indexing the data. Instead of ploygon_zone, it should be polygon_zone. I believe fixing the typo will solve the issue you are facing.
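As an illustration only, the corrected payload could look like the sketch below; the coordinates are the ones from the question, and restaurant stands in for whatever Restaurant instance is being saved.

# Same zone entry as in the question, with the key spelled polygon_zone.
zones = [
    {
        "slug": "dhaka",
        "polygon_zone": {
            "type": "polygon",
            "coordinates": [[
                [89.84207153320312, 24.02827811169503],
                [89.78233337402344, 23.93040645231774],
                [89.82833862304688, 23.78722976367578],
                [90.02197265625, 23.801051951752406],
                [90.11329650878905, 23.872024546162947],
                [90.11672973632812, 24.00883517846163],
                [89.84207153320312, 24.02827811169503],
            ]]
        },
    }
]

restaurant.zones = zones   # `restaurant` is assumed to be an existing Restaurant instance
restaurant.save()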

How to require data in the JSON list?

How does one require data in a list? And how does one do this while also specifying the item type, e.g. dictionary?
Below are two JSON samples and a schema. Both JSON samples are valid according to the schema. The sample with the empty list should fail validation IMHO. How do I make that happen?
from jsonschema import validate
# this is ok per the schema
{
  "mylist": [
    {
      "num_items": 8,
      "freq": 8.5,
      "other": 2
    },
    {
      "num_items": 8,
      "freq": 8.5,
      "other": 4
    }
  ]
}
# this should fail validation, but does not.
{
  "mylist": [
  ]
}
# schema
{
  "$schema": "http://json-schema.org/schema#",
  "type": "object",
  "properties": {
    "mylist": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "num_items": {
            "type": "integer"
          },
          "freq": {
            "type": "number"
          },
          "other": {
            "type": "integer"
          }
        },
        "required": [
          "freq",
          "num_items",
          "other"
        ]
      }
    }
  },
  "required": [
    "mylist"
  ]
}
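No answer was posted, but as a hedged sketch: JSON Schema's standard minItems keyword rejects arrays shorter than the given length, and the existing "items": {"type": "object"} clause is what constrains each entry to a dictionary. The snippet below trims the schema to the relevant parts.

from jsonschema import validate, ValidationError

# Trimmed schema: "minItems" makes an empty "mylist" invalid, and the
# object type under "items" requires each entry to be a dictionary.
schema = {
    "$schema": "http://json-schema.org/schema#",
    "type": "object",
    "properties": {
        "mylist": {
            "type": "array",
            "minItems": 1,
            "items": {"type": "object"},
        }
    },
    "required": ["mylist"],
}

try:
    validate({"mylist": []}, schema)
except ValidationError as e:
    print(e.message)   # e.g. "[] is too short" (wording varies by jsonschema version)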

Python - Parse complex JSON with objectpath

I need to parse a Terraform state file, written in JSON format. I have to extract two pieces of data, the resource name and its id. Here is an example file:
{
  "version": 1,
  "serial": 1,
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {
      },
      "resources": {
        "aws_security_group.vpc-xxxxxxx-test-1": {
          "type": "aws_security_group",
          "primary": {
            "id": "sg-xxxxxxxxxxxxxx",
            "attributes": {
              "description": "test-1",
              "name": "test-1"
            }
          }
        },
        "aws_security_group.vpc-xxxxxxx-test-2": {
          "type": "aws_security_group",
          "primary": {
            "id": "sg-yyyyyyyyyyyy",
            "attributes": {
              "description": "test-2",
              "name": "test-2"
            }
          }
        }
      }
    }
  ]
}
For every resource I need to export the resource key together with the value of its id; in this case, aws_security_group.vpc-xxxxxxx-test-1 sg-xxxxxxxxxxxxxx and aws_security_group.vpc-xxxxxxx-test-2 sg-yyyyyyyyyyyy.
I have tried to write this in Python:
#!/usr/bin/python3.6
import json
import objectpath

with open('file.json') as json_file:
    data = json.load(json_file)
    json_tree = objectpath.Tree(data['modules'])
    result = tuple(json_tree.execute('$..resources[0]'))
The result is:
('aws_security_group.vpc-xxxxxxx-test-1', 'aws_security_group.vpc-xxxxxxx-test-2')
That is OK, but I can't extract the id. Any help is appreciated; other methods are also welcome.
Thanks
I don't know objectpath, but I think you need:
tree.execute('$..resources[0]..primary.id')
or even just
tree.execute('$..resources[0]..id')
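If objectpath ends up being awkward, a plain dictionary walk over the same structure is another option; this is only a sketch based on the example file above.

import json

with open('file.json') as json_file:
    data = json.load(json_file)

# Walk each module's "resources" mapping and pair every resource key with the
# id stored under its "primary" section.
for module in data['modules']:
    for resource_name, resource in module['resources'].items():
        print(resource_name, resource['primary']['id'])

# Expected output for the example file:
# aws_security_group.vpc-xxxxxxx-test-1 sg-xxxxxxxxxxxxxx
# aws_security_group.vpc-xxxxxxx-test-2 sg-yyyyyyyyyyyy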

How to modify nested JSON with python

I need to update (CRUD) a nested JSON file using Python: to be able to call Python functions to update/delete/create entries and write the result back to the JSON file.
Here is a sample file.
I am looking at the remap library but not sure if this will work.
{
  "groups": [
    {
      "name": "group1",
      "properties": [
        {
          "name": "Test-Key-String",
          "value": {
            "type": "String",
            "encoding": "utf-8",
            "data": "value1"
          }
        },
        {
          "name": "Test-Key-Integer",
          "value": {
            "type": "Integer",
            "data": 1000
          }
        }
      ],
      "groups": [
        {
          "name": "group-child",
          "properties": [
            {
              "name": "Test-Key-String",
              "value": {
                "type": "String",
                "encoding": "utf-8",
                "data": "value1"
              }
            },
            {
              "name": "Test-Key-Integer",
              "value": {
                "type": "Integer",
                "data": 1000
              }
            }
          ]
        }
      ]
    },
    {
      "name": "group2",
      "properties": [
        {
          "name": "Test-Key2-String",
          "value": {
            "type": "String",
            "encoding": "utf-8",
            "data": "value2"
          }
        }
      ]
    }
  ]
}
I feel like I'm missing something in your question. In any event, what I understand is that you want to read a JSON file, edit the data as a Python object, and then write it back out with the updated data.
Read the JSON file:
import json

with open("data.json") as f:
    data = json.load(f)
That creates a dictionary (given the format you've shown) that you can manipulate however you want. Assuming you then want to write it out:
with open("data.json","w") as f:
json.dump(data,f)
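As a hedged illustration of the middle step, here is one possible set of edits against the sample file above; the new names and values (Test-Key2-Integer, new-value) are made up for the example.

import json

with open("data.json") as f:
    data = json.load(f)

# Update: change the data of group1's "Test-Key-String" property.
data["groups"][0]["properties"][0]["value"]["data"] = "new-value"

# Create: append a new property to group2.
data["groups"][1]["properties"].append({
    "name": "Test-Key2-Integer",
    "value": {"type": "Integer", "data": 42},
})

# Delete: remove the nested "group-child" group from group1.
del data["groups"][0]["groups"][0]

with open("data.json", "w") as f:
    json.dump(data, f, indent=2)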
