No implementation for Kind exception, Python Google App Engine

I'm trying to access a reference property of an ndb PolyModel subclass from a db Expando
subclass. My two classes look like this:
class Foo(polymodel.PolyModel):
    ...

class Bar(db.Expando):
    ...
    foo_reference = db.ReferenceProperty(None, collection_name='foos')
    ...
The two definitions are in different files.
I assign the reference the following way:
...
foo = Foo.query().get()
bar.foo_reference = ndb.Key.to_old_key(foo.key)
...
I have no problems doing this, and I can see the entry stored in the datastore in the App Engine dashboard. But when I try to access foo_reference, I get a 'No implementation for kind Foo' exception. The problem line looks like this:
foo = bar.foo_reference.get()
I have double-checked all my imports, and I can actually create a Foo entity in the same place where I try to access the referenced entity.
Is there some restriction in the db reference properties for referencing ndb?
How do I fix this issue?

Your Bar and Foo classes need to be imported. Until you have imported them, the underlying mechanism for retrieving entities and recreating instances of the models can't find the class.
Importing them creates a registry mapping kinds to classes.
Maybe the path to the handler that runs the query isn't importing the models.
Looking further at your code, you are also mixing db and ndb, and you have a lot of typos. Also, why are you using ndb.Key.to_old_key if you are using db rather than ndb for the model definition, or is that another typo?
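As a sketch of the fix (the module name here is an assumption, not from the question):

# handler.py -- the code that dereferences the property
import foo_models  # hypothetical module defining class Foo; importing it registers the kind 'Foo'

foo = bar.foo_reference.get()  # the kind can now be resolved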

Creating Object With A For Loop

Firstly, I do apologise as I'm not quite sure how to word this query within the Python syntax. I've just started learning it today having come from a predominantly PowerShell-based background.
I'm presently trying to obtain a list of projects within our organisation within Google Cloud. I want to display this information in two columns: project name and project number - essentially an object. I then want to be able to query the object to say: where project name is "X", give me the project number.
However, I'm rather having difficulty in creating said object. My code is as follows:
import os
from pprint import pprint
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('cloudresourcemanager', 'v1', credentials=credentials)
request = service.projects().list()
response = request.execute()
projects = response.get('projects')
The 'projects' variable then seems to be a list, rather than an object I can explore and run queries against. I've tried running things like:
pprint(projects.name)
projects.get('name')
Both of which return the error:
"AttributeError: 'list' object has no attribute 'name'"
I looked into creating a Class within a For loop as well, which nearly gave me what I wanted, but only displayed one project name and project number at a time, rather than the entire collection I can query against:
projects = []
for project in response.get('projects', []):
    class ProjectClass:
        name = project['name']
        projectNumber = project['projectNumber']
    projects.append(ProjectClass.name)
    projects.append(ProjectClass.projectNumber)
I thought if I stored each class in a list it might work, but alas, no such joy! Perhaps I need to have the For loop within the class variables?
Any help with this would be greatly appreciated!
As #Code-Apprentice mentioned in a comment, I think you are missing a critical piece of object-oriented programming, namely the difference between a class and an object. Think of a class as a "blueprint" for creating objects: your class ProjectClass tells Python that objects of type ProjectClass will have two fields, name and projectNumber. However, ProjectClass itself is just the blueprint, not an object. You then need to create an instance of ProjectClass, which you would do like so:
project_class_1 = ProjectClass()
Great, now you have an object of type ProjectClass, and it will have fields name and projectNumber, which you can reference like so:
project_class_1.name
project_class_1.projectNumber
However, you will notice that all instances of the class that you create will have the same value for name and projectNumber, and this just won't do! We need to be able to specify values when we create each instance. Enter __init__(), a special Python method colloquially referred to as the constructor. This function is called by Python automatically when we create a new instance of our class as above, and is responsible for setting up all the fields of that class. Another powerful feature of classes and objects is that you can define a collection of different functions that can be called at will.
class ProjectClass:
    def __init__(self, name, projectNumber):
        self.name = name
        self.projectNumber = projectNumber
Much better. But wait, what's that self variable? Well, just as before we were able to reference the fields of our instance via the project_class_1 variable name, we need a way to access the fields of our instance while we're running functions that are part of that instance, right? Enter self: it is the name conventionally given to the first parameter of every instance method, and it holds a reference to the instance of ProjectClass that is being accessed. That way, we can set fields on the instance that will persist, but that won't be shared with or overwritten by other instances of ProjectClass. It's important to remember that the first argument passed to any function defined on a class will always be self (except for some edge cases you don't need to worry about now).
So restructuring your code, you would have something like this:
class ProjectClass:
    def __init__(self, name, projectNumber):
        self.name = name
        self.projectNumber = projectNumber

projects = []
for project in response.get('projects', []):
    projects.append(ProjectClass(project["name"], project["projectNumber"]))
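From there, the "where project name is X, give me the project number" lookup could be a simple sketch like this (the name "X" is just a placeholder):

# find the projectNumber of the first project whose name matches, or None
number = next((p.projectNumber for p in projects if p.name == "X"), None)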
Hopefully I've explained this well and given you a complete answer on how all these pieces fit together. The hope is for you to be able to write that code on your own and not just give you the answer!

Cross module variable sharing without duplicating imports

I have a module models.py with some data logic:
db = PostgresqlDatabase(database='database', user='user')
# models logic
and a Flask app which actually interacts with the database:
from models import db, User, ...
But I want to initialize all the settings from one config file in the Flask app.
So that I can separate importing db from the other imports (I need this to access the module-level variable db in models):
import models
from models import User, ...
app.config.from_object(os.environ['APP_SETTINGS'])
models.db = PostgresqlDatabase(database=app.config['db'],
                               user=app.config['db'])
and from then on use the db as models.db.
But this seems kind of ugly: duplicated imports and inconsistent ways of referring to the module's contents.
Is there a better way to handle this situation?
I'd recommend one level of indirection, so that your code becomes like this:
import const
import runtime

def foo():
    runtime.db.execute("SELECT ... LIMIT ?", (..., const.MAX_ROWS))
You get:
clear separation of leaf module const
mocking and/or reloading is possible
uniform and concise access in all user modules
To get a richer API on runtime, use the "replace the module with an object at import" trick (see __getattr__ on a module).
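A minimal sketch of what runtime could look like (the module layout and config keys are assumptions, not from the question):

# runtime.py -- holds shared runtime state, populated exactly once at startup
from peewee import PostgresqlDatabase

db = None

def init(config):
    global db
    db = PostgresqlDatabase(database=config['db'], user=config['db'])

# app.py
import os
import runtime

app.config.from_object(os.environ['APP_SETTINGS'])
runtime.init(app.config)

models.py and every other consumer then just does import runtime and reads runtime.db after init() has run, with no duplicate imports.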

Specifying a key for datastore entities

I'm still trying to get my head around how the datastore works. I don't have previous experience with databases, so it's not a conflicting paradigm situation; I think I'm just confused about NDB's ancestor structure.
Let's say I have this model class:
class Spam(ndb.Model):
    eggs = ndb.StringProperty()
So I create an instance and store it like this:
foo = Spam(eggs="some string")
foo.put()
I understand that put() returns a key that I could easily call get() on if I'm trying to access it from the same script, but is there a way to specify my own key, so I can easily access the foo entity from another script in my app?
I realize I can specify a parent for foo like this:
foo = Spam(parent=ndb.Key("Bar", "Baz"), eggs="some string")
But from there, how would I use "Bar" and/or "Baz" to access foo in a different script?
Parents are used if you have a hierarchy. So if you have recipe books you would put the book as the parent and each recipe as a child. I don't think that's what you want.
If you want to set the key do this:
SuperEggs = Spam(id='SuperEggs', eggs="2 egg whites")
SuperEggs.put()
You can always let App Engine set its own keys; this prevents contention and accidental overwrites. When you want access to the entity again, simply query on some field: add a name property and search on that.
FYI, the id returned from the put lets you access the entity from any part of your app (or any authorized app). The datastore is global, not specific to a script.
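Retrieving the entity from another script then looks like this (both forms are standard ndb):

super_eggs = ndb.Key('Spam', 'SuperEggs').get()
# or, equivalently:
super_eggs = Spam.get_by_id('SuperEggs')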

trouble getting pylint to find inherited methods in pylons/SA models

I have a Pylons app that I'm using SqlAlchemy declarative models for. In order to make the code a bit cleaner I add a .query onto the SA Base and inherit all my models from that.
So in my app.model.meta I have
Base = declarative_base()
metadata = Base.metadata
Session = scoped_session(sessionmaker())
Base.query = Session.query_property(Query)
I then inherit this in app.model.mymodel and declare the model as a child of meta.Base. This lets me write my queries as
mymodel.query.filter(mymodel.id == 3).all()
The trouble is that pylint is not seeing .query as a valid attribute of my models.
E:102:JobCounter.reset_count: Class 'JobCounter' has no 'query' member
Obviously this error is all over the place since it occurs on any model doing any query. I don't want to just skip the error because it might point out something down the road on non-orm classes, but I must be missing something for pylint to accept this.
Any hints?
The best I could find for this is to pass pylint a list of classes on which to ignore this check. It'll still do other checks for these classes; you'll just have to maintain the list somewhere:
pylint --ignored-classes=MyModel1,MyModel2 myfile.py
I know it's not ideal, but there's something about the way that sqlalchemy sets up the models that confuses pylint. At least with this you still get the check for non-orm classes.
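If you'd rather not pass the flag on every run, the same option can live in your project's pylintrc (the class list is, of course, whatever your models are called):

[TYPECHECK]
ignored-classes=MyModel1,MyModel2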

SqlAlchemy optimizations for read-only object models

I have a complex network of objects being spawned from a sqlite database using sqlalchemy ORM mappings. I have quite a few deeply nested loops:
for parent in owner.collection:
    for child in parent.collection:
        for foo in child.collection:
            # do lots of calcs with foo.property
My profiling is showing me that the sqlalchemy instrumentation is taking a lot of time in this use case.
The thing is: I don't ever change the object model (mapped properties) at runtime, so once they are loaded I don't NEED the instrumentation, or indeed any sqlalchemy overhead at all. After much research, I'm thinking I might have to clone a 'pure python' set of objects from my already loaded 'instrumented objects', but that would be a pain.
Performance is really crucial here (it's a simulator), so maybe writing those layers as C extensions using sqlite api directly would be best. Any thoughts?
If you reference a single attribute of a single instance lots of times, a simple trick is to store it in a local variable.
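Applied to the question's inner loop, that trick is just (a sketch; property is the attribute name from the pseudocode above):

for foo in child.collection:
    p = foo.property  # one instrumented lookup instead of many
    # ... do lots of calcs with p ...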
If you want a way to create cheap pure python clones, share the dict object with the original object:
class CheapClone(object):
    def __init__(self, original):
        self.__dict__ = original.__dict__
Creating a copy like this costs about half as much as a single instrumented attribute access, and attribute lookups on the clone are as fast as on a normal object.
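A usage sketch, with foo being one of the instrumented instances from the question:

plain_foo = CheapClone(foo)  # shares the already-loaded attribute dict
value = plain_foo.property   # plain attribute lookup, no instrumentation overhead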
There might also be a way to have the mapper create instances of an uninstrumented class instead of the instrumented one. If I have some time, I might take a look how deeply ingrained is the assumption that populated instances are of the same type as the instrumented class.
Found a quick and dirty way that seems to at least somewhat work on 0.5.8 and 0.6. I didn't test it with inheritance or other features that might interact badly. Also, this touches some non-public APIs, so beware of breakage when changing versions.
from sqlalchemy.orm.attributes import ClassManager, instrumentation_registry

class ReadonlyClassManager(ClassManager):
    """Enables configuring a mapper to return instances of uninstrumented
    classes instead. To use, add a readonly_type attribute referencing the
    desired class to use instead of the instrumented one."""

    def __init__(self, class_):
        ClassManager.__init__(self, class_)
        self.readonly_version = getattr(class_, 'readonly_type', None)
        if self.readonly_version:
            # default instantiation logic doesn't know to install finders
            # for our alternate class
            instrumentation_registry._dict_finders[self.readonly_version] = self.dict_getter()
            instrumentation_registry._state_finders[self.readonly_version] = self.state_getter()

    def new_instance(self, state=None):
        if self.readonly_version:
            instance = self.readonly_version.__new__(self.readonly_version)
            self.setup_instance(instance, state)
            return instance
        return ClassManager.new_instance(self, state)

Base = declarative_base()
Base.__sa_instrumentation_manager__ = ReadonlyClassManager
Usage example:
class ReadonlyFoo(object):
    pass

class Foo(Base, ReadonlyFoo):
    __tablename__ = 'foo'

    id = Column(Integer, primary_key=True)
    name = Column(String(32))

    readonly_type = ReadonlyFoo

assert type(session.query(Foo).first()) is ReadonlyFoo
You should be able to disable lazy loading on the relationships in question and sqlalchemy will fetch them all in a single query.
Try using a single query with JOINs instead of the python loops.
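A sketch of both suggestions combined, using modern SQLAlchemy eager loading (the model and relationship names are assumptions based on the question's loops):

from sqlalchemy.orm import joinedload

owners = (
    session.query(Owner)
    .options(joinedload(Owner.collection)
             .joinedload(Parent.collection)
             .joinedload(Child.collection))
    .all()
)
# One JOINed SELECT populates all three collection levels, so the
# nested loops no longer trigger a lazy load per row.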
