JSON from (RIOT) API Formatted Incorrectly - python

I am importing JSON data into Python from an API and ran into the following decode error:
JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
Looking at online examples, it's immediately clear my JSON data has ' where others have ".
Ideally, I'd like to know why it's being downloaded in this way. It seems highly likely it's an error on my end, not theirs.
I decided it should be easy to correct the JSON format but I have failed here too. Please see the below code for how I obtain the JSON data and my attempt at fixing it.
#----------------------------------------
#---Read this only if you want to download
#---the data yourself.
#----------------------------------------
#Built from 'Towards Data Science' guide
#https://towardsdatascience.com/how-to-use-riot-api-with-python-b93be82dbbd6
#Must first have installed riotwatcher
#Info in my example is made up, I can't supply a real API code or
#I get in trouble. Sorry about this. You could obtain one from their website
#but this would be a lot of faff for what is probably a simple StackOverflow
#question
#If you were to get/have a key you could use the following information:
#<EUW> for region
#<Agurin> for name
#----------------------------------------
#---Code
#----------------------------------------
#--->Set Variables
#Get installed riotwatcher module for
#Python
import riotwatcher
#Import riotwatcher tools.
from riotwatcher import LolWatcher, ApiError
#Import JSON (to read the JSON API file)
import json
# Global variables
# Get new API from
# https://developer.riotgames.com/
api_key = 'RGAPI-XXXXXXX-XXX-XXXX-XXXX-XXXX'
watcher = LolWatcher(api_key)
my_region = 'MiddleEarth'
#need to give path to where records
#are to be stored
records_dir = "/home/solebaysharp/Projects/Riot API/Records"
#--->Obtain initial data, setup new variables
#Use 'watcher' to get basic stats and setup my account as a variable (me)
me = watcher.summoner.by_name(my_region, "SolebaySharp")
# Setup retrieval of recent match info
my_matches = watcher.match.matchlist_by_account(my_region, me["accountId"])
print(my_matches)
#--->Download the recent match data
#Define where the JSON data is going to go
recent_matches_index_json = (records_dir + "/recent_matches_index.json")
#get that JSON data
print ("Downloading recent match history data")
file_handle = open(recent_matches_index_json,"w+")
file_handle.write(str(my_matches))
file_handle.close()
#convert it to python
file_handle = open(recent_matches_index_json,)
recent_matches_index = json.load(file_handle)
Except this gives the following error...
JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
So instead to correct this I tried:
file_handle = open(recent_matches_index_json)
json_sanitised = json.loads(file_handle.replace("'", '"'))
This returns...
AttributeError: '_io.TextIOWrapper' object has no attribute 'replace'
For the sake of completeness, beneath is a sample of what the JSON looks like. I have added the line breaks to enhance readability; it does not come this way.
{'matches': [
{'platformId': 'NA1',
'gameId': 5687555181,
'champion': 235,
'queue': 400,
'season': 13,
'timestamp': 1598243995076,
'role': 'DUO_SUPPORT',
'lane': 'BOTTOM'
},
{'platformId': 'NA1',
'gameId': 4965733458,
'champion': 235,
'queue': 400,
'season': 13,
'timestamp': 1598240780841,
'role': 'DUO_SUPPORT',
'lane': 'BOTTOM'
},
{'platformId': 'NA1',
'gameId': 4583215645,
'champion': 111,
'queue': 400,
'season': 13,
'timestamp': 1598236666162,
'role': 'DUO_SUPPORT',
'lane': 'BOTTOM'
}],
'startIndex': 0,
'endIndex': 100,
'totalGames': 186}

This is occurring because str() converts the dict to its Python string representation, not to JSON.
file_handle.write(str(my_matches))
Python's string representation of a dict uses ' by default, so the file you write is a Python literal rather than valid JSON.
We can stop this from happening by using json.dumps. This partially (certainly not fully) answers the second part of the question as well, as we're now writing correctly formatted JSON.
The above-mentioned line simply has to be replaced with this:
file_handle.write(json.dumps(my_matches))
This will preserve the JSON formatting.
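For completeness, a minimal sketch of the whole round trip, assuming my_matches is the dict returned by riotwatcher. It also shows a safer way to recover a file that was already written with str(): since str(dict) produces a Python literal rather than JSON, ast.literal_eval can parse it back, whereas a blanket replace("'", '"') would corrupt any value containing an apostrophe.
import ast
import json

# Write: serialise the dict as real JSON
with open(recent_matches_index_json, "w") as file_handle:
    file_handle.write(json.dumps(my_matches))

# Read: parse it straight back into a Python dict
with open(recent_matches_index_json) as file_handle:
    recent_matches_index = json.load(file_handle)

# Recover an old file that was written with str(my_matches):
# ast.literal_eval parses Python literals, which is what str() produced
with open(recent_matches_index_json) as file_handle:
    recovered = ast.literal_eval(file_handle.read())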

Related

Python 'list' object has no attribute 'keys' when trying to write a row in CSV file

I am trying to write a new row into a CSV file and I can't because I get an error in Python Shell.
Below is the code I am using (I am reading JSON from API and want to put data into CSV file)
# import urllib library
from urllib.request import Request, urlopen
c=1
# import json
import json
# store the URL in url as
# parameter for urlopen
import pandas as pd
import csv
headerList = ['name','id','order','height','weight','speed','special_defense','special_attack','defense','attack','hp']
# open CSV file and assign header
with open("pokemon_stats.csv", 'w') as file:
dw = csv.DictWriter(file, delimiter=',',
fieldnames=headerList)
dw.writeheader()
# display csv file
fileContent = pd.read_csv("pokemon_stats.csv")
for r in range(1,3):
req = Request('https://pokeapi.co/api/v2/pokemon/'+str(r)+'/', headers={'User-Agent': 'Chrome/32.0.1667.0'})
# store the response of URL
response = urlopen(req)
# storing the JSON response
# from url in data
data_json = json.loads(response.read())
#print(data_json)
for key, value in data_json.items():
if key=='name':
name=value
elif key=='id':
id=value
elif key=='order':
order=value
elif key=='height':
height=value
elif key=='weight':
weight=value
elif key == 'stats':
for sub in data_json['stats']:
for i in sub:
if i=='base_stat':
base_stat=sub[i]
if i=='stat':
for j in sub[i]:
if j=='name':
stat_name=sub[i][j]
if stat_name=='hp':
hp=base_stat
elif stat_name=='attack':
attack=base_stat
elif stat_name=='defense':
defense=base_stat
elif stat_name=='special-attack':
special_attack=base_stat
elif stat_name=='special-defense':
special_defense=base_stat
elif stat_name=='speed':
speed=base_stat
data = [name,id,order,height,weight,speed,special_defense,special_attack,defense,attack,hp]
dw.writerow(data)
After I try the execution of this code I get an error as it follows:
Traceback (most recent call last):
File "C:/Users/sbelcic/Desktop/NANOBIT_API.py", line 117, in <module>
dw.writerow(data)
File "C:\Users\sbelcic\AppData\Local\Programs\Python\Python37\lib\csv.py", line 155, in writerow
return self.writer.writerow(self._dict_to_list(rowdict))
File "C:\Users\sbelcic\AppData\Local\Programs\Python\Python37\lib\csv.py", line 148, in _dict_to_list
wrong_fields = rowdict.keys() - self.fieldnames
AttributeError: 'list' object has no attribute 'keys'
Can somebody please help and tell me what I am doing wrong?
I don't have working experience manipulating JSON responses with Python, so any comments are welcome. If someone sees a better way to do this, they are welcome to share.
Since dw is a DictWriter, data needs to be a dictionary (currently it's a list), as seen in the documentation.
Convert data to a dictionary with your headers
data = [name,id,order,height,weight,speed,special_defense,special_attack,defense,attack,hp]
data = dict(zip(headerList, data))
dw.writerow(data)
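As a quick illustration with made-up values, zip pairs each header with the value at the same position, and dict turns those pairs into the dictionary that DictWriter expects:
headerList = ['name', 'id', 'order']
data = ['bulbasaur', 1, 1]
print(dict(zip(headerList, data)))
# {'name': 'bulbasaur', 'id': 1, 'order': 1}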
Check the example for using the DictWriter. You need to pass a dictionary to writerow instead of a list, so your last line should be
data = {'name': name, 'id': id, 'order': order, 'height': height, 'weight': weight, 'speed': speed, 'special_defense': special_defense, 'special_attack': special_attack, 'defense': defense, 'attack': attack, 'hp': hp}
dw.writerow(data)
Note that your whole code can also be simplified if you populate the data dictionary instead of all your if/else:
data = {}  # empty dictionary
# First extract everything that is on the main level of your dict
for key in ("name", "id", "order", "height", "weight"):
    if key in data_json:
        data[key] = data_json[key]
# Check if the "stats" dict exists in your JSON data
if 'stats' in data_json:
    if 'base_stat' in data_json['stats']:
        data['base_stat'] = data_json['stats']['base_stat']
    if 'stat' in data_json['stats']:
        statDict = data_json['stats']['stat']
        for key in ['hp', 'attack', 'defense', 'special-attack', 'special-defense', 'speed']:
            if key in statDict:
                data[key] = statDict[key]
Notes:
I did not test this code, check it carefully, but I hope you get the idea
You could add else to all if key in checks to include an error message if a stat is missing
If you are sure that all keys will always be present, then you can skip a few of the if checks
I'm going to ignore the actual error that got you here, and instead propose a radical restructure: I think your code will be simpler and easier to reason about.
I've looked at the JSON returned from that Pokemon API and I can see why you started down the path you did: there's a lot of data, and you only need a small subset of it. So, you're going through a lot of effort to pick out exactly what you want.
The DictWriter interface can really help you here. Consider this really small example:
header = ['name', 'id', 'order']
with open('output.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=header)
    writer.writeheader()
    writer.writerow({'name': 'bulbasaur', 'id': 1, 'order': 1, 'species': {}})
Maybe you've run something like this before and got this error:
ValueError: dict contains fields not in fieldnames: 'species'
because the JSON you loaded has keys you didn't include when you created your writer... because you don't want them. And then, maybe you figured, "well, that means I've got to be very selective about what I put in the dict before passing to writerow()?"
Since you've already defined which keys you care about for the header, use those keys to pull out what you want from the JSON:
header = ['name', 'id', 'order', 'height', 'weight',
          'speed', 'special_defense', 'special_attack',
          'defense', 'attack', 'hp']
all_data = json.load(open('1.json'))  # bulbasaur, I downloaded this from the API URL
my_data = {}
for key in header:
    my_data[key] = all_data.get(key)  # will return None for sub-stats keys, which is okay for now
writer = csv.DictWriter(sys.stdout, fieldnames=header)
writer.writeheader()
writer.writerow(my_data)
The get(key_name) method on a dict (the JSON data) will try to find that key in the dict and return that key's value. If the key isn't found, None is returned. Running that I get the following CSV (the sub-stat columns are empty, as expected):
name,id,order,height,weight,speed,special_defense,special_attack,defense,attack,hp
bulbasaur,1,1,7,69,,,,,,
This has the same effect as your "if this key, then this value" statements, but it's driven by the data (header names) you already defined.
On to the sub-stats...
I think it's safe to assume that if there is a stats key in the JSON, each "stat object" in the list of stats will have the data you want. It's important to make sure you're only copying the stats you've specified in header; and again, you can use your data to drive the process:
for stat in all_data['stats']:
    stat_name = stat['stat']['name']
    if stat_name not in header:
        continue  # skip this sub-stat, no column for it in the CSV
    base_stat = stat['base_stat']
    my_data[stat_name] = base_stat
When I insert that loop, I now get this for my CSV output:
name,id,order,height,weight,speed,special_defense,special_attack,defense,attack,hp
bulbasaur,1,1,7,69,45,,,49,49,45
Some stats are populated, but some (the "special" stats) are blank. That's because in your header you've named them like special_attack (with an underscore) but in reality they're special-attack (with a hyphen). I fixed your header, and now I get:
name,id,order,height,weight,speed,special-defense,special-attack,defense,attack,hp
bulbasaur,1,1,7,69,45,65,65,49,49,45
Those are all the pieces you need. To put it together, I recommend the following structure. I'm a big fan of breaking up a process like this into distinct tasks: get all the data, then process all the data, then write all the processed data. It makes debugging easier and keeps the code less deeply indented:
# Make all API calls and record their JSON
all_datas = []
# loop over your API calls:
#     make the request
#     get the JSON data
#     append JSON data to all_datas

# Process/transform the API JSON into what you want
my_data_rows = []
for all_data in all_datas:
    my_data_row = {}
    for key in header:
        my_data_row[key] = all_data.get(key)
    for stat in all_data['stats']:
        stat_name = stat['stat']['name']
        if stat_name not in header:
            continue  # skip this sub-stat
        base_stat = stat['base_stat']
        my_data_row[stat_name] = base_stat
    my_data_rows.append(my_data_row)

# Write your transformed data to CSV
writer = csv.DictWriter(sys.stdout, fieldnames=header)
writer.writeheader()
writer.writerows(my_data_rows)
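For reference, here is one way that skeleton might be filled in, reusing the question's URL, fetch style and header list; this is a sketch, untested against the live API:
import csv
import json
import sys
from urllib.request import Request, urlopen

header = ['name', 'id', 'order', 'height', 'weight',
          'speed', 'special-defense', 'special-attack',
          'defense', 'attack', 'hp']

# 1. Make all API calls and record their JSON
all_datas = []
for r in range(1, 3):
    req = Request('https://pokeapi.co/api/v2/pokemon/' + str(r) + '/',
                  headers={'User-Agent': 'Chrome/32.0.1667.0'})
    all_datas.append(json.loads(urlopen(req).read()))

# 2. Process/transform the API JSON into rows
my_data_rows = []
for all_data in all_datas:
    my_data_row = {key: all_data.get(key) for key in header}
    for stat in all_data['stats']:
        stat_name = stat['stat']['name']
        if stat_name in header:
            my_data_row[stat_name] = stat['base_stat']
    my_data_rows.append(my_data_row)

# 3. Write the transformed rows to CSV (stdout here; swap in a file handle as needed)
writer = csv.DictWriter(sys.stdout, fieldnames=header)
writer.writeheader()
writer.writerows(my_data_rows)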

Retrieving all comments from a thread on Reddit

I'm new to APIs and working with JSON and would love some help here.
I know everything I'm trying to accomplish can be done using the PRAW library, but I'm trying to figure it out without PRAW.
I have a for loop that pulls post titles from a specific subreddit, inputs all the post titles into a pandas data frame, and after the limit is reached, changes the 'after' parameter to the last post id so it repeats with the next batch.
Everything worked perfectly, but when I tried the same technique with a specific thread and gathering the comments, the 'after' parameter doesn't work to grab the next batch.
I'm assuming 'after' works differently with threads than with a subreddit's posts. I saw in the JSON 'more' with a list of ids. Do I need to use this somehow? When I looked at the JSON for the thread, the 'after' says 'none' even with the updated parameters.
Any idea on what I need to change here? It's probably something simple.
Working code for getting the subreddit posts with limit 5:
params = {"t":"day","limit":5}
for i in range(2):
response = requests.get('https://oauth.reddit.com/r/stocks/new',
headers=headers, params = params)
response = response.json()
for post in response['data']['children']:
name = post['data']['name']
print('name',name)
params['after'] = name
print(params)
Giving the output:
name t3_lifixn
name t3_lifg68
name t3_lif6u2
name t3_lif5o2
name t3_lif3cm
{'t': 'day', 'limit': 5, 'after': 't3_lif3cm'}
name t3_lif26d
name t3_lievhr
name t3_liev9i
name t3_liepud
name t3_lie41e
{'t': 'day', 'limit': 5, 'after': 't3_lie41e'}
Code for the Reddit thread with limit 10
params = {"limit":10}
for i in range(2):
response = requests.get('https://oauth.reddit.com/r/wallstreetbets/comments/lgrc39/',
params = params,headers=headers)
response = response.json()
for post in response[1]['data']['children']:
name = post['data']['name']
print(name)
params['after'] = name
print(params)
Giving the output:
t1_gmt20i4
t1_gmzo4xw
t1_gmzjofk
t1_gmzjkcy
t1_gmtotfl
{'limit': 10, 'after': 't1_gmtotfl'}
t1_gmt20i4
t1_gmzo4xw
t1_gmzjofk
t1_gmzjkcy
t1_gmtotfl
{'limit': 10, 'after': 't1_gmtotfl'}
Even though the limit was set to 10, it only gave 5 ids before continuing the loop. Also, even though the 'after' parameter was updated, the listing just restarted from the beginning.
I ended up figuring out how to do it. Reading the documentation for Reddit's API: when you are in a thread and want to pull more comments, you have to compile a list of the ids from the 'more' sections in the JSON. It's a nested tree and looks like the following:
{'kind': 'more', 'data': {'count': 161, 'name': 't1_gmuram8', 'id': 'gmuram8', 'parent_id': 't1_gmt20i4', 'depth': 1, 'children': ['gmuram8', 'gmt6mf6', 'gmubxmr', 'gmt63gl', 'gmutw5j', 'gmtpitn', 'gmtoec3', 'gmtnel0', 'gmt4p79', 'gmupqhx', 'gmv70rm', 'gmtu2sj', 'gmt2vc7', 'gmtmjai', 'gmtje0b', 'gmtkzzj', 'gmt93n5', 'gmtvsqa', 'gmumhat', 'gmuj73q', 'gmtor7c', 'gmuqcwv', 'gmt3lxe', 'gmt4l78', 'gmum9cm', 'gmt857f', 'gmtjrz3', 'gmu0qcl', 'gmt9t9i', 'gmt8jc7', 'gmurron', 'gmt3ysv', 'gmt6neb', 'gmt4v3x', 'gmtoi6t']}}
When using the GET request, you would use the following URL format:
requests.get('https://oauth.reddit.com/api/morechildren/.json?api_type=json&link_id=t3_lgrc39&children=gmt20i4,gmuram8....etc')
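Building on that, here is an untested sketch of how those ids can be batched into a morechildren call. It assumes the same headers dict used in the code above; the ids are a few of the children from the 'more' node:
import requests

more_ids = ['gmuram8', 'gmt6mf6', 'gmubxmr']  # collected from 'more' nodes

params = {
    'api_type': 'json',
    'link_id': 't3_lgrc39',            # fullname of the thread
    'children': ','.join(more_ids),    # up to 100 ids per request
}
response = requests.get('https://oauth.reddit.com/api/morechildren',
                        headers=headers, params=params)
things = response.json()['json']['data']['things']
for thing in things:
    print(thing['data']['name'], thing['data'].get('body', ''))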

Writing JSON data in python. Format

I have this method that writes JSON data to a file. The title is based on books, and the data is the book's publisher, date, author, etc. The method works fine if I only want to add one book.
Code
import json

def createJson(title,firstName,lastName,date,pageCount,publisher):
    print "\n*** Inside createJson method for " + title + "***\n"
    data = {}
    data[title] = []
    data[title].append({
        'firstName:', firstName,
        'lastName:', lastName,
        'date:', date,
        'pageCount:', pageCount,
        'publisher:', publisher
    })
    with open('data.json','a') as outfile:
        json.dump(data, outfile, default=set_default)

def set_default(obj):
    if isinstance(obj,set):
        return list(obj)

if __name__ == '__main__':
    createJson("stephen-king-it","stephen","king","1971","233","Viking Press")
JSON File with one book/one method call
{
    "stephen-king-it": [
        ["pageCount:233", "publisher:Viking Press", "firstName:stephen", "date:1971", "lastName:king"]
    ]
}
However, if I call the method multiple times, thus adding more book data to the JSON file, the format is all wrong. For instance, if I simply call the method twice with a main method of
if __name__ == '__main__':
createJson("stephen-king-it","stephen","king","1971","233","Viking Press")
createJson("william-golding-lord of the flies","william","golding","1944","134","Penguin Books")
My JSON file looks like
{
"stephen-king-it": [
["pageCount:233", "publisher:Viking Press", "firstName:stephen", "date:1971", "lastName:king"]
]
} {
"william-golding-lord of the flies": [
["pageCount:134", "publisher:Penguin Books", "firstName:william","lastName:golding", "date:1944"]
]
}
Which is obviously wrong. Is there a simple fix to edit my method to produce a correct JSON format? I looked at many simple examples online of writing JSON data in Python, but all of them gave me format errors when I checked them on JSONLint.com. I have been racking my brain to fix this problem by editing the file to make it correct, but all my efforts were to no avail. Any help is appreciated. Thank you very much.
Simply appending new objects to your file doesn't create valid JSON. You need to add your new data inside the top-level object, then rewrite the entire file.
This should work:
def createJson(title,firstName,lastName,date,pageCount,publisher):
    print("\n*** Inside createJson method for " + title + "***\n")
    # Load any existing json data,
    # or create an empty object if the file is not found
    # or is empty
    try:
        with open('data.json') as infile:
            data = json.load(infile)
    except (FileNotFoundError, ValueError):
        data = {}
    if not data:
        data = {}
    data[title] = []
    data[title].append({
        'firstName:', firstName,
        'lastName:', lastName,
        'date:', date,
        'pageCount:', pageCount,
        'publisher:', publisher
    })
    with open('data.json','w') as outfile:
        json.dump(data, outfile, default=set_default)
A JSON document can either be an array or an object (a dictionary). In your case the JSON has two objects, one with the key stephen-king-it and another with william-golding-lord of the flies. Either of these on their own would be okay, but the way you combine them is invalid.
Using an array you could do this:
[
{ "stephen-king-it": [] },
{ "william-golding-lord of the flies": [] }
]
Or a dictionary style format (I would recommend this):
{
"stephen-king-it": [],
"william-golding-lord of the flies": []
}
Also, the data you are appending looks like it should be formatted as key-value pairs in a dictionary (which would be ideal). You need to change it to this:
data[title].append({
    'firstName': firstName,
    'lastName': lastName,
    'date': date,
    'pageCount': pageCount,
    'publisher': publisher
})
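Putting both fixes together (rewrite the whole file, and store real key-value pairs), a minimal sketch of the method could look like this. Note that set_default is no longer needed once you append a dict instead of a set:
import json

def createJson(title, firstName, lastName, date, pageCount, publisher):
    # Load what's already on disk, or start fresh if the file is
    # missing or not yet valid JSON
    try:
        with open('data.json') as infile:
            data = json.load(infile)
    except (FileNotFoundError, ValueError):
        data = {}
    # Append the new book as a dictionary of key-value pairs
    data.setdefault(title, []).append({
        'firstName': firstName,
        'lastName': lastName,
        'date': date,
        'pageCount': pageCount,
        'publisher': publisher,
    })
    # Rewrite the entire file so it stays a single valid JSON object
    with open('data.json', 'w') as outfile:
        json.dump(data, outfile, indent=4)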

PyXero Library Validation Exception

I'm trying to add a payment to Xero using the pyxero Python library for Python 3.
I'm able to add invoices and contacts, but adding a payment always returns a validation exception.
Here is the data I'm submitting:
payments.put([{'Amount': '20.00',
               'Date': datetime.date(2016, 5, 25),
               'AccountCode': 'abc123',
               'Reference': '8831_5213',
               'InvoiceID': '09ff0465-d1b0-4fb3-9e2e-3db4e83bb240'}])
And the xero response:
xero.exceptions.XeroBadRequest: ValidationException: A validation exception occurred
Please note: this solution became a hack inside pyxero to get the result I needed. This may not be the best solution for you.
The XML that pyxero generates for "payments.put" does not match the "PUT Payments" XML structure found in the xero documentation.
I first changed the structure of your dictionary so that the XML generated in basemanager.py was similar to the documentation's.
data = {
    'Invoice': {'InvoiceID': "09ff0465-d1b0-4fb3-9e2e-3db4e83bb240"},
    'Account': {"AccountID": "58F8AD72-1F2E-AFA2-416C-8F660DDD661B"},
    'Date': datetime.datetime.now(),
    'Amount': 30.00,
}
xero.payments.put(data)
The error still persisted though, so I was forced to start changing code inside pyxero's basemanager.py.
In basemanager.py on line 133, change the formatting of the date:
val = sub_data.strftime('%Y-%m-%dT%H:%M:%S')
to:
val = sub_data.strftime('%Y-%m-%d')
pyxero originally returns the time as well; this field is supposed to be a date-only value, and the docs stipulate the formatting.
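To see the difference between the two format strings:
import datetime

d = datetime.datetime(2016, 5, 25, 14, 30)
print(d.strftime('%Y-%m-%dT%H:%M:%S'))  # 2016-05-25T14:30:00 <- what pyxero sent
print(d.strftime('%Y-%m-%d'))           # 2016-05-25          <- what the docs expect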
Then, again in basemanager.py, on line 257, change the following:
body = {'xml': self._prepare_data_for_save(data)}
to:
if self.name == "Payments":
    body = {'xml': "<Payments>%s</Payments>" % self._prepare_data_for_save(data)}
else:
    body = {'xml': self._prepare_data_for_save(data)}
Please note that in order for you to be able to create a payment in the first place, the Invoice's "Status" must be set to "AUTHORISED".
Also, make sure the Payment's "Amount" is no greater than Invoice's "AmountDue" value.

Boto - How to delete a record set from Route53 - Tried to delete resource record set but it was not found

I am using the following to delete route53 records. I get no error messages.
conn = Route53Connection(aws_access_key_id, aws_secret_access_key)
changes = ResourceRecordSets(conn, zone_id)
change = changes.add_change("DELETE",sub_domain, "A", 60,weight=weight,identifier=identifier)
change.add_value(ip_old)
changes.commit()
All required fields are present and they match: weight, identifier, ttl=60, etc.
e.g.
test.com. A 111.111.111.111 60 1 id1
test.com. A 111.111.111.222 60 1 id2
I want to delete 111.111.111.222 and the record set.
So, what is the proper way to delete a record set?
For a record set, I will have multiple values that are distinguished by a unique identifier. When an IP address becomes inactive, I want to remove it from Route53. I am using a poor man's load balancing.
Here is the meta of the record I want to delete.
{'alias_dns_name': None,
'alias_hosted_zone_id': None,
'identifier': u'15754-1',
'name': u'hui.com.',
'resource_records': [u'103.4.xxx.xxx'],
'ttl': u'60',
'type': u'A',
'weight': u'1'}
Traceback (most recent call last):
File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 353, in <module>
deleteRedisSubDomains(aws_access_key_id, aws_secret_access_key,platform=platform,sub_domain=sub_domain,redis_domain=redis_domain,zone_id=zone_id,ip_address=ip_address,weight=1,identifier=identifier)
File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 341, in deleteRedisSubDomains
changes.commit()
File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/record.py", line 131, in commit
return self.connection.change_rrsets(self.hosted_zone_id, self.to_xml())
File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/connection.py", line 291, in change_rrsets
body)
boto.route53.exception.DNSServerError: DNSServerError: 400 Bad Request
<?xml version="1.0"?>
<ErrorResponse xmlns="https://route53.amazonaws.com/doc/2011-05-05/"><Error><Type>Sender</Type><Code>InvalidChangeBatch</Code><Message>Tried to delete resource record set hui.com., type A, SetIdentifier 15754-1 but it was not found</Message></Error><RequestId>9972af89-cb69-11e1-803b-7bde5b9c457d</RequestId></ErrorResponse>
Thanks
Are you sure you need all of those parameters for add_change?
Look at add_change here.
Default parameters are given to the function, so you may be over-specifying by providing weight and TTL.
Try leaving weight and TTL out (you may need to keep identifier). This blog provides a simple example of deleting records.
Also, I can't see the values of the parameters you're passing, but check their integrity and try including a '.' at the end of your subdomain.
I tried a similar example and had to specify all fields, including weight and ttl, for a successful deletion (leaving them at the defaults did not work). I could not reproduce the original problem with a weighted DNS record and an explicitly passed ttl.
boto3
import boto3
client = boto3.client('route53')
hosted_zone_id = "G5LEP7LWYS8WL2"
response = client.change_resource_record_sets(
    ChangeBatch={
        'Changes': [
            {
                'Action': 'DELETE',
                'ResourceRecordSet': {
                    'Name': 'test.example.com',
                    'ResourceRecords': [
                        {
                            'Value': '10.1.1.1',
                        },
                    ],
                    'Type': 'A',
                },
            },
        ]
    },
    HostedZoneId=hosted_zone_id,
)
print(response)
Source: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/route53.html#Route53.Client.change_resource_record_sets
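Note that for a weighted record set like the one in the question, the DELETE has to match the existing record exactly, so SetIdentifier, Weight and TTL must be included alongside name, type and value. An untested sketch using the record metadata shown in the question:
response = client.change_resource_record_sets(
    HostedZoneId=hosted_zone_id,
    ChangeBatch={
        'Changes': [{
            'Action': 'DELETE',
            'ResourceRecordSet': {
                'Name': 'hui.com.',
                'Type': 'A',
                'SetIdentifier': '15754-1',
                'Weight': 1,
                'TTL': 60,
                'ResourceRecords': [{'Value': '103.4.xxx.xxx'}],
            },
        }]
    },
)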
boto
import boto
from boto.route53.record import ResourceRecordSets
conn = boto.connect_route53()
hosted_zone_id = "G5LEP7LWYS8WL2"
record_sets = ResourceRecordSets(conn, hosted_zone_id)
change = record_sets.add_change("DELETE", "test.example.com", "A")
change.add_value("10.1.1.1")
response = record_sets.commit()
print(response)
Source: https://danieljamesscott.org/17-software/development/33-manipulating-aws-route53-entries-using-boto.html
