I'm developing a small app for a university project and I need to JSON-encode the result of a query to pass it to a JS library. I've read elsewhere that I can use model_to_dict to accomplish that, but I'm getting this error:
AttributeError: 'SelectQuery' object has no attribute '_meta'
and I don't know why, or what to do. Does anyone know how to solve this?
I'm using Python 2.7 and the latest version of peewee.
@app.route('/ormt')
def orm():
    doitch = Player.select().join(Nationality).where(Nationality.nation % 'Germany')
    return model_to_dict(doitch)
This is because doitch is a SelectQuery instance, not a model instance; you have to call get() to retrieve a single row:
from flask import jsonify
from playhouse.shortcuts import model_to_dict

@app.route('/ormt')
def orm():
    doitch = Player.select().join(Nationality).where(Nationality.nation % 'Germany')
    return jsonify(model_to_dict(doitch.get()))
Also, you could use the dicts() method to get the data as a dict. This skips creating the model instances entirely:
from flask import jsonify

@app.route('/ormt')
def orm():
    doitch = Player.select().join(Nationality).where(Nationality.nation % 'Germany')
    return jsonify(doitch.dicts().get())
edit
As @lord63 pointed out, you cannot simply return a dict; it must be a Flask response, so convert it with jsonify.
edit 2
@app.route('/ormt')
def orm():
    doitch = Player.select().join(Nationality).where(Nationality.nation % 'Germany')
    # another query
    sth = Something.select()
    return jsonify({
        'doitch': doitch.dicts().get(),
        'something': sth.dicts().get()
    })
Related
I'm currently working through the Flask Mega-Tutorial (Part XVI) and have gotten stuck implementing Elasticsearch. Specifically, I get this error when running the following from the flask shell:
>>> from app.search import add_to_index, remove_from_index, query_index
>>> for post in Post.query.all():
...     add_to_index('posts', post)
AttributeError: module 'flask.app' has no attribute 'elasticsearch'
I should mention that I did not implement the app restructuring from the previous lesson to use blueprints. Here's what my files look like:
__init__.py:
# ...
from elasticsearch import Elasticsearch

app.elasticsearch = Elasticsearch([app.config['ELASTICSEARCH_URL']]) \
    if app.config['ELASTICSEARCH_URL'] else None
config.py:
class Config(object):
    # ...
    ELASTICSEARCH_URL = 'http://localhost:9200'
search.py:
from flask import app

def add_to_index(index, model):
    if not app.elasticsearch:
        return
    payload = {}
    for field in model.__searchable__:
        payload[field] = getattr(model, field)
    app.elasticsearch.index(index=index, id=model.id, body=payload)

def remove_from_index(index, model):
    if not app.elasticsearch:
        return
    app.elasticsearch.delete(index=index, id=model.id)

def query_index(index, query, page, per_page):
    if not app.elasticsearch:
        return [], 0
    search = app.elasticsearch.search(
        index=index,
        body={'query': {'multi_match': {'query': query, 'fields': ['*']}},
              'from': (page - 1) * per_page, 'size': per_page})
    ids = [int(hit['_id']) for hit in search['hits']['hits']]
    return ids, search['hits']['total']['value']
I think I'm not importing elasticsearch correctly into search.py but I'm not sure how to represent it given that I didn't do the restructuring in the last lesson. Any ideas?
The correct way to write it in the search.py file is from flask import current_app, then use current_app.elasticsearch in place of app.elasticsearch.
Not sure if you got this working, but the way I implemented it was by still using app.elasticsearch but instead within search.py do:
from app import app
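To make the current_app fix concrete, here is a sketch of add_to_index rewritten around it (assuming, as in the question, that app.elasticsearch was attached to the application in __init__.py):

```python
from flask import current_app

def add_to_index(index, model):
    # current_app proxies whichever application context is active,
    # so this module never has to import the app object directly
    if not current_app.elasticsearch:
        return
    payload = {field: getattr(model, field) for field in model.__searchable__}
    current_app.elasticsearch.index(index=index, id=model.id, body=payload)
```

From `flask shell` this works without extra setup, because the shell pushes an application context automatically.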
I am writing a Python API using Flask. The API currently accepts only one parameter, questionID. I would like it to accept a second parameter, lastDate. I've looked around for how to add this parameter but couldn't find a good method. My current code looks as follows:
from flask import Flask, request
from flask_restful import Resource, Api, reqparse
from sqlalchemy import create_engine
from json import dumps
from flask_jsonpify import jsonify
import psycopg2
from pandas import read_sql

connenction_string = "DB Credentials"

app = Flask(__name__)
api = Api(app)

class GetUserAnswers(Resource):
    def get(self, questionID):
        conn = psycopg2.connect(connenction_string)
        cursor = conn.cursor()
        userAnswers = read_sql('''select * from <tablename>
                                  where questionid = ''' + "'" + questionID + "'" + '''
                                  order by timesansweredincorrectly desc limit 15''',
                               con=conn)
        conn.commit()
        conn.close()
        result = {}
        for index, row in userAnswers.iterrows():
            result[index] = dict(row)
        return jsonify(result)

api.add_resource(GetUserAnswers, '/GetUserAnswers/<questionID>')

if __name__ == '__main__':
    app.run(port='5002')
Question 1: I'm guessing I can accept the second parameter in the get definition. If this is not true, how should I accept the second parameter?
Question 2: How do I modify the api.add_resource() call to accept the second parameter?
Question 3: I currently use http://localhost:5002/GetUserAnswers/<some question ID> to call this API from the browser. How would this call change with a second parameter?
I have never developed an API before, so any help would be much appreciated.
If you want to add multiple parameters within the URL path, for example:
http://localhost:5002/GetUserAnswers/<question_id>/answers/<answer_id>
then you need to add multiple parameters to your get method (and register a matching route with api.add_resource(GetUserAnswers, '/GetUserAnswers/<question_id>/answers/<answer_id>')):
def get(self, question_id, answer_id):
    # your code here
But if you instead want to add multiple query parameters to the URL, for example:
http://localhost:5002/GetUserAnswers/<question_id>?lastDate=2020-01-01&totalCount=10
then you can use request arguments:
def get(self, question_id):
    lastDate = request.args.get('lastDate')
    totalCount = request.args.get('totalCount')
    # your code here
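A runnable sketch tying the two together (using plain Flask routes instead of flask_restful, with invented route and parameter values, purely for illustration):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/GetUserAnswers/<question_id>')
def get_user_answers(question_id):
    # the path parameter arrives as a function argument; query-string
    # parameters are optional, so give them defaults
    last_date = request.args.get('lastDate')
    total_count = request.args.get('totalCount', default=10, type=int)
    return jsonify({'questionID': question_id,
                    'lastDate': last_date,
                    'totalCount': total_count})

# exercise the route without running a server, via the test client
with app.test_client() as client:
    resp = client.get('/GetUserAnswers/8888?lastDate=2020-01-01&totalCount=5')
    print(resp.get_json())
```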
Consider several adjustments to your code:
For a simpler implementation like yours, use route decorators in Flask and avoid the need to define and register a Resource class;
Use parameterization in SQL and avoid the potentially dangerous and messy string concatenation;
Avoid the heavy data-analytics library pandas and its inefficient row-by-row iterrows loop. Instead, handle everything with the cursor object; specifically, use DictCursor from psycopg2.extras;
Refactored Python code (adjust the assumed handling of lastDate):
# ... same imports as above, leaving out the heavy pandas ...
import psycopg2.extras  # needed for DictCursor

app = Flask(__name__)

@app.route('/GetUserAnswers', methods=['GET'])
def GetUserAnswers():
    questionID = request.args.get('questionID', None)
    lastDate = request.args.get('lastDate', None)

    conn = psycopg2.connect(connenction_string)
    cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)

    userAnswers = '''SELECT * FROM <tablename>
                     WHERE questionid = %s
                       AND lastdate = %s
                     ORDER BY timesansweredincorrectly DESC
                     LIMIT 15'''

    # EXECUTE SQL WITH PARAMS
    cur.execute(userAnswers, (questionID, lastDate))

    # SAVE TO LIST OF DICTIONARIES
    result = [dict(row) for row in cur.fetchall()]

    cur.close()
    conn.close()
    return jsonify(result)

if __name__ == '__main__':
    app.run(port=5002)
Browser Call
http://localhost:5002/GetUserAnswers?questionID=8888&lastDate=2020-01-08
I am working on a project that involves RETS. My collaborator had problems installing the rets-client package for JS, so I decided to use Python instead and found rets-python at https://github.com/opendoor-labs/rets.
I am able to get data back from RETS. I wanted to build an API around it, so I installed Flask (I'm new to Flask). However, when I try to return the data as JSON I keep getting errors.
This is my code:
import json
from flask import Flask, Response, jsonify

app = Flask(__name__)

@app.route('/')
def mls():
    from rets.client import RetsClient

    client = RetsClient(
        login_url='xxxxxxxxxxxx',
        username='xxxxxxxxxxxx',
        password='xxxxxxxxxxxxx',
        # Ensure that you are using the right auth_type for this particular MLS
        # auth_type='basic',
        # Alternatively authenticate using user agent password
        user_agent='xxxxxxxxxxxxxxxx',
        user_agent_password='xxxxxxxxxxxxxxxx'
    )

    resource = client.get_resource('Property')
    print(resource)
    print(resource.key_field)
    resource_class = resource.get_class('RD_1')
    print(resource_class)
    search_result = resource_class.search(query='(L_Status=1_0)', limit=2)
I will write my returns here, as I have tried a few different ones:
    # return jsonify(results=search_result.data)
    # TypeError: <Record: Property:RD_1:259957852> is not JSON serializable

    # return Response(json.dumps(search_result.data), mimetype='application/json')
    # TypeError: <Record: Property:RD_1:259957852> is not JSON serializable

    return json.dumps(search_result.data)
    # TypeError: <Record: Property:RD_1:259957852> is not JSON serializable
I even tried making the results into a dict, such as dict(search_result.data), which gives errors like TypeError: cannot convert dictionary update sequence element #0 to a sequence.
print(type(search_result.data[0].data)) returns <class 'collections.OrderedDict'>. When I try jsonify, json.dumps, or Response on just this one record, with or without dict(), I get TypeError: Decimal('1152') is not JSON serializable.
Also, print(type(search_result.data)) gives tuple.
Is anyone able to give me a hand with this?
Thanks in advance for any help and advice.
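As a hedged aside on the Decimal('1152') error: json.dumps accepts a default hook that is called for any object it cannot serialize natively, so Decimals can be converted to float (or str) there. A self-contained sketch; the record contents are invented stand-ins for one RETS record's OrderedDict payload:

```python
import json
from decimal import Decimal
from collections import OrderedDict

def fallback(obj):
    # called by json.dumps for any object it cannot serialize natively
    if isinstance(obj, Decimal):
        return float(obj)  # or str(obj) to avoid precision loss
    raise TypeError('%r is not JSON serializable' % obj)

# invented stand-in for one record's OrderedDict payload
record = OrderedDict([('L_ListPrice', Decimal('1152')), ('L_Status', '1_0')])

print(json.dumps(record, default=fallback))  # -> {"L_ListPrice": 1152.0, "L_Status": "1_0"}
```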
I have two GAE apps working in conjunction. One holds an object in a database; the other gets that object from the first app. Below is the bit of code where the first app is asked for, and returns, the Critter object. I am trying to access the first app's object via urllib2; is this really possible? I know it can be used for JSON, but can it be used for objects?
Just for some context, I am developing this as a project for a class. The students will learn how to host a GAE app by creating their critters. Then they will give me the URLs for their critters, and my app will use those URLs to collect all of their critters and put them into my app's world.
I've only recently heard about pickle and have not looked into it yet; might that be a better alternative?
critter.py:
class Access(webapp2.RequestHandler):
    def get(self):
        creature = CritStore.all().order('-date').get()
        if creature:
            stats = loads(creature.stats)
            return SampleCritter(stats)
        else:
            return SampleCritter()
map.py:
class Out(webapp2.RequestHandler):
    def post(self):
        url = self.request.POST['url']  # from a simple html textbox
        critter = urllib2.urlopen(url)
        # ...work with critter as if it were the critter object...
Yes, you can use pickle.
Here is some sample code to transfer an entity, including its key:
entity_dict = entity.to_dict() # First create a dict of the NDB entity
entity_dict['entity_ndb_key_safe'] = entity.key.urlsafe() # add the key of the entity to the dict
pickled_data = pickle.dumps(entity_dict, 1) # serialize the object
encoded_data = base64.b64encode(pickled_data) # encode it for safe transfer
As an alternative to urllib2 you can use GAE's urlfetch.fetch().
In the requesting app you can :
entity_dict = pickle.loads(base64.b64decode(encoded_data))
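The full round trip can be sketched in a self-contained way (using a plain dict with invented values in place of a real NDB entity, which is an assumption for illustration):

```python
import pickle
import base64

# invented stand-in for entity.to_dict() plus the urlsafe key
entity_dict = {'name': 'critter', 'hp': 42, 'entity_ndb_key_safe': 'agxkZXYta2V5'}

# sending side
pickled_data = pickle.dumps(entity_dict, 1)    # serialize the dict
encoded_data = base64.b64encode(pickled_data)  # encode for safe transfer

# receiving side
restored = pickle.loads(base64.b64decode(encoded_data))
print(restored['hp'])  # -> 42
```

Note that unpickling data from an untrusted URL executes arbitrary code, so this is only safe between apps you control.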
How can I get a JSON object in Python from data fetched from the Google App Engine datastore?
I've got model in datastore with following field:
id
key_name
object
userid
created
Now I want to get all objects for one user:
query = Model.all().filter('userid', user.user_id())
How can I create a JSON object from the query so that I can write it?
I want to get the data via AJAX call.
Not sure if you got the answer you were looking for, but did you mean how to parse the model (entry) data in the Query object directly into a JSON object? (At least that's what I've been searching for).
I wrote this to parse the entries from Query object into a list of JSON objects:
def gql_json_parser(query_obj):
    result = []
    for entry in query_obj:
        result.append(dict([(p, unicode(getattr(entry, p))) for p in entry.properties()]))
    return result
You can have your app respond to AJAX requests by encoding the result with simplejson, e.g.:
query_data = MyModel.all()
json_query_data = gql_json_parser(query_data)
self.response.headers['Content-Type'] = 'application/json'
self.response.out.write(simplejson.dumps(json_query_data))
Your app will return something like this:
[{'property1': 'value1', 'property2': 'value2'}, ...]
Let me know if this helps!
If I understood you correctly, I have implemented a system that works something like this. It sounds like you want to store an arbitrary JSON object in a GAE datastore model. To do this you need to encode the JSON into a string on the way into the database, and decode it from a string back into a Python data structure on the way out. You will need a JSON encoder/decoder for this; I think the GAE infrastructure includes one. For example, you could use a "wrapper class" to handle the encoding/decoding, something along these lines...
class InnerClass(db.Model):
    jsonText = db.TextProperty()
    def parse(self):
        return Wrapper(self)

class Wrapper:
    def __init__(self, storage=None):
        self.storage = storage
        self.json = None
        if storage is not None:
            self.json = fromJsonString(storage.jsonText)
    def put(self):
        jsonText = ToJsonString(self.json)
        if self.storage is None:
            self.storage = InnerClass()
        self.storage.jsonText = jsonText
        self.storage.put()
Then always operate on parsed Wrapper objects instead of the inner class:
def getall():
    all = db.GqlQuery("SELECT * FROM InnerClass")
    for x in all:
        yield x.parse()
(Untested.) See datastoreview.py for some model implementations that work like this.
I did the following to convert the Google query object to JSON. I used the logic from gql_json_parser above, except for the part where everything is converted to unicode: I want to preserve data types like integers, floats and nulls.
import json

class JSONEncoder(json.JSONEncoder):
    def default(self, obj):
        if hasattr(obj, 'isoformat'):  # handles both date and datetime objects
            return obj.isoformat()
        else:
            return json.JSONEncoder.default(self, obj)

class BaseResource(webapp2.RequestHandler):
    def to_json(self, gql_object):
        result = []
        for item in gql_object:
            result.append(dict([(p, getattr(item, p)) for p in item.properties()]))
        return json.dumps(result, cls=JSONEncoder)
Now you can subclass BaseResource and call self.to_json on the gql_object
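The encoder part can be checked in isolation (the sample row below is invented; webapp2 and the GQL pieces are omitted):

```python
import json
import datetime

class JSONEncoder(json.JSONEncoder):
    def default(self, obj):
        if hasattr(obj, 'isoformat'):  # handles both date and datetime objects
            return obj.isoformat()
        return json.JSONEncoder.default(self, obj)

# invented sample row mixing the types the answer wants to preserve
row = {'count': 3, 'ratio': 0.5, 'missing': None,
       'created': datetime.date(2020, 1, 8)}

print(json.dumps(row, cls=JSONEncoder))
```

Integers, floats and None come through as native JSON types, while the date falls into the isoformat branch and becomes the string "2020-01-08".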