How to remove all accents from a JSON file in Python [duplicate]

This question already has answers here:
What is the best way to remove accents (normalize) in a Python unicode string?
(13 answers)
Closed 2 years ago.
I'm importing a JSON file in Python, but the file is full of accented characters in the city names (they are in Portuguese), and I need to remove them somehow before using the file further.
For example, the words 'São Paulo', 'Santo André' and 'Foz do Iguaçu' should become Sao Paulo, Santo Andre and Foz do Iguacu in the JSON.
{ "type": "FeatureCollection", "features": [
{ "type": "Feature", "properties": {"id": "1100015", "name": "São Paulo", "description": "Alta Floresta D'Oeste"}, "geometry": { "type": "Polygon", "coordinates": [-62.1820888570, -11.8668597878] }},
{ "type": "Feature", "properties": {"id": "1100023", "name": "Santo André", "description": "Ariquemes"}, "geometry": { "type": "Polygon", "coordinates": [-62.5359497334, -9.7318235272] }},
{ "type": "Feature", "properties": {"id": "1100031", "name": "Foz do Iguaçu", "description": "Cabixi"}, "geometry": { "type": "Polygon", "coordinates": [-60.3993982597, -13.4558418276] }}
]
}

Use unidecode (the third-party Unidecode package, installable with pip install Unidecode) :)
import unidecode
import json
places_json = '''
{ "type": "FeatureCollection",
"features": [
{ "type": "Feature", "properties": {"id": "1100015", "name": "São Paulo", "description": "Alta Floresta D'Oeste"}, "geometry": { "type": "Polygon", "coordinates": [-62.1820888570, -11.8668597878] }},
{ "type": "Feature", "properties": {"id": "1100023", "name": "Santo André", "description": "Ariquemes"}, "geometry": { "type": "Polygon", "coordinates": [-62.5359497334, -9.7318235272] }},
{ "type": "Feature", "properties": {"id": "1100031", "name": "Foz do Iguaçu", "description": "Cabixi"}, "geometry": { "type": "Polygon", "coordinates": [-60.3993982597, -13.4558418276] }}
]
}
'''
json_dec = unidecode.unidecode(places_json)
print(json.loads(json_dec))
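If the goal is a cleaned file rather than a Python object, you can also write the transliterated string straight back out (a sketch; the output name places_ascii.json is just an example):
# json_dec is already valid JSON text with the accents stripped
with open('places_ascii.json', 'w', encoding='utf-8') as f:
    f.write(json_dec)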

@alexander-riedel has the right idea, but I think the wrong implementation: because you have JSON, you shouldn't convert the whole thing to a string and transliterate it wholesale.
Instead, loop through the keys and convert them individually. It looks like only the names need converting, so you can do:
from unidecode import unidecode
data = { "type": "FeatureCollection", "features": [
{ "type": "Feature", "properties": {"id": "1100015", "name": "São Paulo", "description": "Alta Floresta D'Oeste"}, "geometry": { "type": "Polygon", "coordinates": [-62.1820888570, -11.8668597878] }},
{ "type": "Feature", "properties": {"id": "1100023", "name": "Santo André", "description": "Ariquemes"}, "geometry": { "type": "Polygon", "coordinates": [-62.5359497334, -9.7318235272] }},
{ "type": "Feature", "properties": {"id": "1100031", "name": "Foz do Iguaçu", "description": "Cabixi"}, "geometry": { "type": "Polygon", "coordinates": [-60.3993982597, -13.4558418276] }}
]
}

# modify in place
for feature in data["features"]:
    feature["properties"]["name"] = unidecode(feature["properties"]["name"])

Related

Change start, end lineString coordinate order in geoJson file

I have a .geojson file with many LineStrings marking the locations of transects used to monitor shoreline change, so each transect crosses a land/water boundary. As stored, the transects originate offshore and end onshore. For an analysis tool I am using, the order needs to be swapped: the first coordinate should start on land and end offshore. I will have many thousands of these transects to change and want to make sure I'm doing it correctly, but I can't seem to figure out this very simple task (sorry, I am new here). I am working in Python and Earth Engine.
# original
{
"type": "FeatureCollection",
"name": "EastChukci_small_testArea",
"crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:EPSG::3857" } },
"features": [
{ "type": "Feature", "properties": { "name": 2722 }, "geometry": { "type": "LineString", "coordinates": [ [ -17592698.71288351342082, 11344741.029055444523692 ], [ -17592054.347651835530996, 11343198.733621645718813 ] ] } },
{ "type": "Feature", "properties": { "name": 2723 }, "geometry": { "type": "LineString", "coordinates": [ [ -17592838.831736516207457, 11344682.393273767083883 ], [ -17592194.440066188573837, 11343140.124529516324401 ] ] } },
{ "type": "Feature", "properties": { "name": 2724 }, "geometry": { "type": "LineString", "coordinates": [ [ -17592978.948162343353033, 11344623.755085829645395 ], [ -17592334.530055023729801, 11343081.513031836599112 ] ] } },
]
}
# desired
{
"type": "FeatureCollection",
"name": "EastChukci_small_testArea",
"crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:EPSG::3857" } },
"features": [
{ "type": "Feature", "properties": { "name": 2722 }, "geometry": { "type": "LineString", "coordinates": [[ -17592054.347651835530996, 11343198.733621645718813 ], [ -17592698.71288351342082, 11344741.029055444523692 ] ] } },
{ "type": "Feature", "properties": { "name": 2723 }, "geometry": { "type": "LineString", "coordinates": [ [ -17592194.440066188573837, 11343140.124529516324401 ], [ -17592838.831736516207457, 11344682.393273767083883 ] ] } },
{ "type": "Feature", "properties": { "name": 2724 }, "geometry": { "type": "LineString", "coordinates": [ [ -17592334.530055023729801, 11343081.513031836599112 ] ], [ -17592978.948162343353033, 11344623.755085829645395 ] ] } },
]
}
Thanks in advance.
To read and write a JSON file you can use the json module. The code below should solve your problem; the downside is that it loads the whole file into memory at once.
import json

with open('data.json', 'r') as json_file:
    data = json.load(json_file)

for feature_data in data['features']:
    feature_data['geometry']['coordinates'].reverse()

with open('data.json', 'w') as json_file:
    json.dump(data, json_file)
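If the file ever mixes geometry types, a slightly more defensive variant (a sketch, not part of the original answer) reverses only the LineString features:
import json

with open('data.json', 'r') as json_file:
    data = json.load(json_file)

for feature_data in data['features']:
    geometry = feature_data['geometry']
    # only the transect LineStrings need their start/end order flipped
    if geometry['type'] == 'LineString':
        geometry['coordinates'].reverse()

with open('data.json', 'w') as json_file:
    json.dump(data, json_file)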

How to write a JSON web response to a CSV file in Python?

Here is the schema of the JSON output which I am trying to parse so that I can write specific fields from it into a CSV file (for example: CVE id, description, ...):
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "JSON Schema for NVD Vulnerability Data Feed version 1.1",
"id": "https://scap.nist.gov/schema/nvd/feed/1.1/nvd_cve_feed_json_1.1.schema",
"definitions": {
"def_cpe_name": {
"description": "CPE name",
"type": "object",
"properties": {
"cpe22Uri": {
"type": "string"
},
"cpe23Uri": {
"type": "string"
},
"lastModifiedDate": {
"type": "string"
}
},
"required": [
"cpe23Uri"
]
},
"def_cpe_match": {
"description": "CPE match string or range",
"type": "object",
"properties": {
"vulnerable": {
"type": "boolean"
},
"cpe22Uri": {
"type": "string"
},
"cpe23Uri": {
"type": "string"
},
"versionStartExcluding": {
"type": "string"
},
"versionStartIncluding": {
"type": "string"
},
"versionEndExcluding": {
"type": "string"
},
"versionEndIncluding": {
"type": "string"
},
"cpe_name": {
"type": "array",
"items": {
"$ref": "#/definitions/def_cpe_name"
}
}
},
"required": [
"vulnerable",
"cpe23Uri"
]
},
"def_node": {
"description": "Defines a node or sub-node in an NVD applicability statement.",
"properties": {
"operator": {"type": "string"},
"negate": {"type": "boolean"},
"children": {
"type": "array",
"items": {"$ref": "#/definitions/def_node"}
},
"cpe_match": {
"type": "array",
"items": {"$ref": "#/definitions/def_cpe_match"}
}
}
},
"def_configurations": {
"description": "Defines the set of product configurations for a NVD applicability statement.",
"properties": {
"CVE_data_version": {"type": "string"},
"nodes": {
"type": "array",
"items": {"$ref": "#/definitions/def_node"}
}
},
"required": [
"CVE_data_version"
]
},
"def_subscore": {
"description": "CVSS subscore.",
"type": "number",
"minimum": 0,
"maximum": 10
},
"def_impact": {
"description": "Impact scores for a vulnerability as found on NVD.",
"type": "object",
"properties": {
"baseMetricV3": {
"description": "CVSS V3.x score.",
"type": "object",
"properties": {
"cvssV3": {"$ref": "cvss-v3.x.json"},
"exploitabilityScore": {"$ref": "#/definitions/def_subscore"},
"impactScore": {"$ref": "#/definitions/def_subscore"}
}
},
"baseMetricV2": {
"description": "CVSS V2.0 score.",
"type": "object",
"properties": {
"cvssV2": {"$ref": "cvss-v2.0.json"},
"severity": {"type": "string"},
"exploitabilityScore": {"$ref": "#/definitions/def_subscore"},
"impactScore": {"$ref": "#/definitions/def_subscore"},
"acInsufInfo": {"type": "boolean"},
"obtainAllPrivilege": {"type": "boolean"},
"obtainUserPrivilege": {"type": "boolean"},
"obtainOtherPrivilege": {"type": "boolean"},
"userInteractionRequired": {"type": "boolean"}
}
}
}
},
"def_cve_item": {
"description": "Defines a vulnerability in the NVD data feed.",
"properties": {
"cve": {"$ref": "CVE_JSON_4.0_min_1.1.schema"},
"configurations": {"$ref": "#/definitions/def_configurations"},
"impact": {"$ref": "#/definitions/def_impact"},
"publishedDate": {"type": "string"},
"lastModifiedDate": {"type": "string"}
},
"required": ["cve"]
}
},
"type": "object",
"properties": {
"CVE_data_type": {"type": "string"},
"CVE_data_format": {"type": "string"},
"CVE_data_version": {"type": "string"},
"CVE_data_numberOfCVEs": {
"description": "NVD adds number of CVE in this feed",
"type": "string"
},
"CVE_data_timestamp": {
"description": "NVD adds feed date timestamp",
"type": "string"
},
"CVE_Items": {
"description": "NVD feed array of CVE",
"type": "array",
"items": {"$ref": "#/definitions/def_cve_item"}
}
},
"required": [
"CVE_data_type",
"CVE_data_format",
"CVE_data_version",
"CVE_Items"
]
}
# -*- coding: utf-8 -*-
"""
Created on Thu Dec 3 17:08:51 2020
#author: Rajat Varshney
"""
import requests, json

api_url = 'https://services.nvd.nist.gov/rest/json/cve/1.0/'
cveid = input('Enter CVE ID: ')
api_call = requests.get(api_url + cveid)
print(api_call.content)

with open('cve details.txt', 'w') as outfile:
    json.dump(api_call.content, outfile)
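For the parse-and-write step itself, here is a minimal sketch. It assumes the response body follows the NVD 1.0 / CVE JSON 4.0 layout referenced by the schema above (a CVE_Items array, possibly wrapped in a result object); the output file name cves.csv and the two columns are only examples.
import csv
import requests

api_url = 'https://services.nvd.nist.gov/rest/json/cve/1.0/'
cveid = input('Enter CVE ID: ')

response = requests.get(api_url + cveid)
data = response.json()  # parse the body instead of dumping the raw bytes

# the REST endpoint may wrap the feed in a "result" object; fall back to the top level if not
items = data.get('result', data).get('CVE_Items', [])

with open('cves.csv', 'w', newline='') as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=['cve_id', 'description'])
    writer.writeheader()
    for item in items:
        meta = item['cve']['CVE_data_meta']
        descriptions = item['cve']['description']['description_data']
        writer.writerow({
            'cve_id': meta['ID'],
            'description': descriptions[0]['value'] if descriptions else '',
        })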

JSON compatibility with Mapbox Studio

I am trying to use a JSON file in Mapbox studio for network analysis but it gives me an error:
Input failed. "type" member required on line 1.
The representative sample of JSON file is:
"version": 0.6,
"generator": "Overpass API 0.7.56.2 b688b00f",
"osm3s": {
"timestamp_osm_base": "2020-03-27T11:58:01Z",
"copyright": "The data included in this document is from www.openstreetmap.org. The data is made available under ODbL."
},
"elements": [
{
"type": "node",
"id": 123458059,
"lat": -38.3344495,
"lon": 143.5394486
},
{
"type": "node",
"id": 123458066,
"lat": -38.3394461,
"lon": 143.5923655,
"tags": {
"crossing": "traffic_signals",
"highway": "traffic_signals"
}
},
{
"type": "way",
"id": 769574290,
"nodes": [
7183581936,
681081177,
1561328098,
1539139562,
448021781
],
"tags": {
"highway": "trunk",
"lanes": "2",
"maxspeed": "80",
"name": "Princes Highway",
"ref": "A1",
"source:maxspeed:sign": "mapillary"
}
},
{
"type": "way",
"id": 776227225,
"nodes": [
1017428185,
317738200
],
"tags": {
"alt_name": "Princes Highway",
"highway": "trunk",
"lanes": "2",
"maxspeed": "50",
"name": "Murray Street",
"ref": "A1",
"source:maxspeed:sign": "OpenStreetCam",
"source:name": "services.land.vic.gov.au"
}
}
]
}
Does the error occur because of the specification of the format? Do we need to reformat the features or types?
To upload data to Mapbox you will need to convert your file to GeoJSON, a geospatial format built on JSON. For example:
{"type": "FeatureCollection",
"features": [
{
"geometry": {
"type": "Point",
"coordinates": [
-76.9750541388,
38.8410857803
]
},
"type": "Feature",
"properties": {
"description": "Southern Ave",
"marker-symbol": "rail-metro",
"title": "Southern Ave",
"url": "http://www.wmata.com/rider_tools/pids/showpid.cfm?station_id=107",
"lines": [
"Green"
],
"address": "1411 Southern Avenue, Temple Hills, MD 20748"
}
},
{
"geometry": {
"type": "Point",
"coordinates": [
-76.935256783,
38.9081784965
]
},
"type": "Feature",
"properties": {
"description": "Deanwood",
"marker-symbol": "rail-metro",
"title": "Deanwood",
"url": "http://www.wmata.com/rider_tools/pids/showpid.cfm?station_id=65",
"lines": [
"Orange"
],
"address": "4720 Minnesota Avenue NE, Washington, DC 20019"
}
}
]}
Mapbox accepts GeoJSON, so you'll need to get the Overpass data into GeoJSON form, either by exporting GeoJSON from Overpass/overpass turbo or by converting the JSON response yourself.
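A minimal Python sketch of that conversion for the node elements (the file names overpass.json and overpass.geojson are hypothetical; way elements would additionally need their node IDs resolved to coordinates, which this does not attempt):
import json

with open('overpass.json', 'r') as f:
    osm = json.load(f)

features = []
for element in osm['elements']:
    if element['type'] != 'node':
        continue  # ways/relations need their member nodes resolved first
    features.append({
        "type": "Feature",
        "geometry": {
            "type": "Point",
            "coordinates": [element['lon'], element['lat']],
        },
        "properties": {"id": element['id'], **element.get('tags', {})},
    })

with open('overpass.geojson', 'w') as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)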

Replace multiple keys and values of JSON file in Python

For geojson type file named data as follows:
{
"type": "FeatureCollection",
"name": "entities",
"features": [{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbPolyline",
"EntityHandle": "1A0"
},
"geometry": {
"type": "LineString",
"coordinates": [
[3220.136443006845184, 3001.530372177397112],
[3847.34171007254281, 3000.86074447018018],
[3847.34171007254281, 2785.240077064262096],
[3260.34191304818205, 2785.240077064262096],
[3260.34191304818205, 2795.954148466309107]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbPolyline",
"EntityHandle": "1A4"
},
"geometry": {
"type": "LineString",
"coordinates": [
[3611.469650131302842, 2846.845982610575902],
[3695.231030111376185, 2846.845982610575902],
[3695.231030111376185, 2785.240077064262096],
[3611.469650131302842, 2785.240077064262096],
[3611.469650131302842, 2846.845982610575902]
]
}
}
]
}
I hope to apply the following manipulations to data:
replace the key EntityHandle with Name;
replace EntityHandle's hex values with 'sf_001', 'sf_002', 'sf_003', etc.;
replace type's value LineString with Polygon;
wrap each coordinates array in an extra pair of square brackets (two levels of nesting become three).
This is expected output:
{
"type": "FeatureCollection",
"name": "entities",
"features": [{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbPolyline",
"Name": "sf_001"
},
"geometry": {
"type": "Polygon",
"coordinates": [ [
[3220.136443006845184, 3001.530372177397112],
[3847.34171007254281, 3000.86074447018018],
[3847.34171007254281, 2785.240077064262096],
[3260.34191304818205, 2785.240077064262096],
[3260.34191304818205, 2795.954148466309107]
] ]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbPolyline",
"Name": "sf_002"
},
"geometry": {
"type": "Polygon",
"coordinates": [ [
[3611.469650131302842, 2846.845982610575902],
[3695.231030111376185, 2846.845982610575902],
[3695.231030111376185, 2785.240077064262096],
[3611.469650131302842, 2785.240077064262096],
[3611.469650131302842, 2846.845982610575902]
] ]
}
}
]
}
I'm new to JSON file manipulation in Python. Please help me, thanks in advance.
import json
from pprint import pprint

with open('data.geojson') as f:
    data = json.load(f)
pprint(data)

for feature in data['features']:
    #print(feature)
    print(feature['properties']['EntityHandle'])

for feature in data['features']:
    feature['properties']['EntityHandle'] = feature['properties']['Name'] #Rename `EntityHandle` to `Name`
    del feature['properties']['EntityHandle']
Result:
Traceback (most recent call last):
File "<ipython-input-79-8b30b71fedf9>", line 2, in <module>
feature['properties']['EntityHandle'] = feature['properties']['Name']
KeyError: 'Name'
Give this a shot:
You can rename a key by using:
dict['NewKey'] = dict.pop('OldKey')
And replacing a value is as simple as assigning the new value to the specific key:
dict['Key'] = new_value
Full Code:
data = {
"type": "FeatureCollection",
"name": "entities",
"features": [{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbPolyline",
"EntityHandle": "1A0"
},
"geometry": {
"type": "LineString",
"coordinates": [
[3220.136443006845184, 3001.530372177397112],
[3847.34171007254281, 3000.86074447018018],
[3847.34171007254281, 2785.240077064262096],
[3260.34191304818205, 2785.240077064262096],
[3260.34191304818205, 2795.954148466309107]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbPolyline",
"EntityHandle": "1A4"
},
"geometry": {
"type": "LineString",
"coordinates": [
[3611.469650131302842, 2846.845982610575902],
[3695.231030111376185, 2846.845982610575902],
[3695.231030111376185, 2785.240077064262096],
[3611.469650131302842, 2785.240077064262096],
[3611.469650131302842, 2846.845982610575902]
]
}
}
]
}
sf_count = 0
for feature in data['features']:
    feature['properties']['Name'] = feature['properties'].pop('EntityHandle')  # rename `EntityHandle` to `Name`
    sf_count += 1
    feature['properties']['Name'] = 'sf_%.3d' % sf_count  # replace the hex value with an incremental sf_xxx
    feature['geometry']['type'] = 'Polygon'  # replace `LineString` with `Polygon`
    feature['geometry']['coordinates'] = [[each for each in feature['geometry']['coordinates']]]  # add one more level of nesting
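To write the modified structure back to disk, a short follow-up (a sketch; the output name data_out.geojson is just an example) could be:
import json

# dump the modified dict as pretty-printed GeoJSON
with open('data_out.geojson', 'w') as f:
    json.dump(data, f, indent=2)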

Invalid GeoJson (Data was not JSON serializable)

I am fairly new to the GeoJSON spec... and formatting is causing havoc.
All I am trying to do is build a new "features" list (for Points only) to which I am adding new 'properties', and then write it into test.json.
Right now I am getting output with intermittent repeats of "type": "FeatureCollection" (I was only expecting to see it once, at the top of the file) and some bad } syntax errors:
{"type": "FeatureCollection",
"features": [
{"geometry": {
"type": "Point",
"coordinates": [-122.3447075, 47.6821492]},
"type": "Feature",
"properties": {
"marker-color": "#808080",
"timestamp": "2013-08-17T22:41:18Z",
"version": 3,
"user": "seattlefyi",
"last_updated": "over a year ago",
"id": 427307160,
"marker-size": "small"
}}, ## what??
]} ## what??
{"type": "FeatureCollection",
"features": [
{"geometry": {
"type": "Point",
"coordinates": [-122.3447075, 47.6821492]},
"type": "Feature",
"properties": {
"marker-color": "#808080",
"timestamp": "2013-08-17T22:41:18Z",
"version": 3,
"user": "seattlefyi",
"last_updated": "over a year ago",
"id": 427307160,
"marker-size": "small"
}}, ## what...no "type": "FeatureCollection" on this one?
{"geometry": {
"type": "Point",
"coordinates": [-122.377932, 47.5641566]},
"type": "Feature",
"properties": {
"marker-color": "#808080",
"timestamp": "2009-07-11T04:04:51Z",
"version": 1,
"user": "Rob Lanphier",
"last_updated": "over a year ago",
"id": 439976119,
"marker-size": "small"
}
}
]
}
However, I'm trying to return
{"type": "FeatureCollection",
"features": [
{"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [-122.3447075, 47.6821492]},
"properties": {
"marker-color": "#808080",
"timestamp": "2013-08-17T22:41:18Z",
"version": 3,
"user": "ralph",
"last_updated": "over a year ago",
"id": 427307160,
"marker-size": "small"
}},
{"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [-122.377932, 47.5641566]},
"properties": {
"marker-color": "#808080",
"timestamp": "2009-07-11T04:04:51Z",
"version": 1,
"user": "Rob Lanphier",
"last_updated": "over a year ago",
"id": 439976119,
"marker-size": "small"
}
}
]
}
Code is:
def write_to_features(source, classified_time, marker_color):
    """ write the json into geojson
    takes all the items from one node ("lat", "lon", "id", "user", "tags", "timestamp")
    writes all the items with new tags "last_updated", "marker-color", "marker-size"
    returns a dict
    """
    pt = {
        "type": "Feature",
        "geometry": {
            "type": 'Point',
            "coordinates": [float(source['lon']), float(source['lat'])]
        },
        "properties": {
            "user": source['user'],
            "id": source['id'],
            "version": source['version'],
            "timestamp": source['timestamp'],
            "last_updated": classified_time,
            "marker-color": marker_color,
            "marker-size": "small"
        }
    }
    return pt

def __main__():
    geojson = { "type": "FeatureCollection", "features": [] }
    outfile = r'.\test.json'
    with open(outfile, 'w') as geojson_file:
        for item in all_data_dict['elements']:
            point_dict = write_to_features(item, data_w_update, data_item_color)
            geojson['features'].append(point_dict)
            json.dump(geojson, geojson_file)
Shouldn't your json.dump(geojson, geojson_file) be outside your loop? You append in the line above it, so I'm questioning why you would dump/write to the file multiple times. I would think you should only be calling json.dump once.
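In other words, a corrected __main__ (a sketch based on that comment; all_data_dict, data_w_update and data_item_color come from the question's surrounding code) appends inside the loop and dumps exactly once afterwards:
import json

def __main__():
    geojson = {"type": "FeatureCollection", "features": []}
    outfile = r'.\test.json'
    for item in all_data_dict['elements']:
        point_dict = write_to_features(item, data_w_update, data_item_color)
        geojson['features'].append(point_dict)
    # write the finished FeatureCollection once, after the loop
    with open(outfile, 'w') as geojson_file:
        json.dump(geojson, geojson_file)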
