Cloud Endpoints with Multiple Service Classes - python

I am starting to use Google Cloud Endpoints and I am running into a problem when specifying multiple service classes. Any idea how to get this working?
ApiConfigurationError: Attempting to implement service myservice, version v1, with multiple classes that aren't compatible. See docstring for api() for examples how to implement a multi-class API.
This is how I am creating my endpoint server.
AVAILABLE_SERVICES = [
    FirstService,
    SecondService,
]
app = endpoints.api_server(AVAILABLE_SERVICES)
and for every service class I am doing this:
@endpoints.api(name='myservice', version='v1', description='MyService API')
class FirstService(remote.Service):
    ...

@endpoints.api(name='myservice', version='v1', description='MyService API')
class SecondService(remote.Service):
    ...
Each one of these works perfectly on its own, but I am not sure how to get them working in combination.
Thanks a lot.

The correct way is to create an api object and use its collection method:
api_root = endpoints.api(name='myservice', version='v1', description='MyService API')

@api_root.collection(resource_name='first')
class FirstService(remote.Service):
    ...

@api_root.collection(resource_name='second')
class SecondService(remote.Service):
    ...
where the resource name is inserted in front of the method names, so that you can use
@endpoints.method(name='method', ...)
def MyMethod(self, request):
    ...
instead of
@endpoints.method(name='first.method', ...)
def MyMethod(self, request):
    ...
Putting this in the API server:
The api_root object is equivalent to a remote.Service class decorated with endpoints.api, so you can simply include it in the endpoints.api_server list. For example:
application = endpoints.api_server([api_root, ...])
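Pulled together, a minimal single-module sketch of this approach; note that the released SDK documents this attachment method as api_class (the name a snippet further down this page also uses), so the sketch uses that name:
import endpoints
from protorpc import remote

# One shared api object; each service class attaches under its own resource name.
api_root = endpoints.api(name='myservice', version='v1',
                         description='MyService API')

@api_root.api_class(resource_name='first')
class FirstService(remote.Service):
    ...

@api_root.api_class(resource_name='second')
class SecondService(remote.Service):
    ...

# The decorated api object stands in for a single service class here.
application = endpoints.api_server([api_root])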

If I'm not mistaken, you should give different names to each service; then you'll be able to access both, each one at its own address.
@endpoints.api(name='myservice_one', version='v1', description='MyService One API')
class FirstService(remote.Service):
    ...

@endpoints.api(name='myservice_two', version='v1', description='MyService Two API')
class SecondService(remote.Service):
    ...

I've managed to successfully deploy a single API implemented in two classes. You can try the following snippet (almost directly from the Google documentation):
an_api = endpoints.api(name='library', version='v1.0')

@an_api.api_class(resource_name='shelves')
class Shelves(remote.Service):
    ...

@an_api.api_class(resource_name='books', path='books')
class Books(remote.Service):
    ...

APPLICATION = endpoints.api_server([an_api], restricted=False)
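For reference, a hedged sketch of what a method on one of these classes could look like (ShelfList and list_shelves are illustrative names, not from the docs); with resource_name='shelves', the method is addressed as library.shelves.list:
import endpoints
from protorpc import message_types, messages, remote

class ShelfList(messages.Message):
    names = messages.StringField(1, repeated=True)

an_api = endpoints.api(name='library', version='v1.0')

@an_api.api_class(resource_name='shelves')
class Shelves(remote.Service):
    @endpoints.method(message_types.VoidMessage, ShelfList,
                      name='list', path='shelves', http_method='GET')
    def list_shelves(self, request):
        # Static data just to keep the sketch self-contained.
        return ShelfList(names=['fiction', 'reference'])

APPLICATION = endpoints.api_server([an_api], restricted=False)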

For local development I'm using a temporary workaround, which is to disable the exception (I know, I know...).
In my SDK, in google_appengine/google/appengine/ext/endpoints/api_backend_service.py, around line 97:
elif service_class != method_class:
    pass
    # raise api_config.ApiConfigurationError(
    #     'SPI registered with multiple classes within one '
    #     'configuration (%s and %s). Each call to register_spi should '
    #     'only contain the methods from a single class. Call '
    #     'repeatedly for multiple classes.' % (service_class,
    #                                           method_class))
if service_class is not None:
In combination with that I'm using the construct:
application = endpoints.api_server([FirstService, SecondService, ...])
Again, this won't work in production; you'll get the same exception there. Hopefully this answer will be made obsolete by a future fix.
Confirmed it's now obsolete (tested against 1.8.2).

If it were Java ...
https://developers.google.com/appengine/docs/java/endpoints/multiclass
couldn't be easier.

Related

Django Mock an imported function used in a class function as part of a unit test

So I'm writing tests for my Django application, and I have successfully mocked quite a few external API calls that aren't needed for the tests; however, one is tripping me up: send_sms. To start, here is the code:
a/models.py:
from utils.sms import send_sms
...

class TPManager(models.Manager):
    def notification_for_job(self, job):
        ...
        send_sms()
        ...

class TP(models.Model):
    objects = TPManager()
    ...
p/test_models.py:
@patch('a.models.send_sms')
@patch('p.signals.send_mail')
def test_tradepro_review_job_deleted(self, send_mail, send_sms):
    job = Job.objects.create(
        tradeuser=self.tradeuser,
        location=location,
        category=category,
        details="sample details for job"
    )
The Job object creation triggers TP.objects.notification_for_job via the view's perform_create method here:
p/views.py:
def perform_create(self, serializer):
    job = serializer.save(tradeuser=self.request.user.tradeuser)
    if settings.DEV_MODE:
        from a.models import TP
        job.approved = True
        job.save()
        TP.objects.notification_for_job(job)
I have tried mocking a.models.TP.objects.notification_for_job, utils.sms.send_sms, and a.models.TPManager.notification_for_job, all to no avail. This is a fairly complex flow, but I believe I have tried the main mock candidates here, and I was wondering if anybody knows how to properly mock either the notification_for_job function or the send_sms function, mostly just to prevent these API calls that inevitably fail in my test environment.
Any ideas are greatly appreciated!
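No accepted fix appears here, but the standard mock rule is worth a hedged sketch: patch the name in the namespace where it is looked up at call time. Since a/models.py does from utils.sms import send_sms, the target is 'a.models.send_sms', which matches what was already tried; if that still fails, check whether notification_for_job is actually reached (perform_create gates it behind settings.DEV_MODE) or whether another module imports send_sms under its own name. Note that stacked @patch decorators apply bottom-up:
from unittest.mock import patch

@patch('a.models.send_sms')    # outermost decorator -> last mock parameter
@patch('p.signals.send_mail')  # innermost decorator -> first mock parameter
def test_tradepro_review_job_deleted(self, send_mail, send_sms):
    job = Job.objects.create(
        tradeuser=self.tradeuser,
        location=location,
        category=category,
        details="sample details for job",
    )
    # If the notification path ran, the patched module-level name was hit.
    send_sms.assert_called()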

How to mock a Django internal library using patch decorator

I'm mocking a Python library class (Server) that provides the connection to an HTTP JSON-RPC server. But when running the test, the class is not mocked. The class is used by calling a project class that is a wrapper for another class, which in turn instantiates the Server class.
I extract here the parts of the code that are relevant to what I'm talking about.
Unit test:
@patch('jsonrpc_requests.jsonrpc.Server')
def test_get_question_properties(self, mockServer):
    lime_survey = Questionnaires()
    # ...
Class Questionnaires:
class Questionnaires(ABCSearchEngine):
    """ Wrapper class for LimeSurvey API """

    def get_question_properties(self, question_id, language):
        return super(Questionnaires, self).get_question_properties(question_id, language)
Class Questionnaires calls the method get_question_properties from class ABCSearchEngine(ABC). This class initializes the Server class to provide the connection to the external API.
Class ABCSearchEngine:
class ABCSearchEngine(ABC):
    session_key = None
    server = None

    def __init__(self):
        self.get_session_key()

    def get_session_key(self):
        # HERE self.server keeps getting the real Server class instead of the mocked one
        self.server = Server(
            settings.LIMESURVEY['URL_API'] + '/index.php/admin/remotecontrol')
As the test is mocking the Server class, why is it not being mocked? What parts are missing?
From what I see, you didn't add a return value.
Where did you put the mocked value in @patch('jsonrpc_requests.jsonrpc.Server')?
What happens if you try adding a MagicMock? (Don't forget to add from mock import patch, MagicMock.)
@patch('jsonrpc_requests.Server', MagicMock('RETURN VALUE HERE'))
You also need to mock the __init__ method (where Server is the one from jsonrpc_requests import Server):
@patch.object(Server, '__init__', MagicMock(return_value=None))
I extrapolated your problem from my own understanding; maybe you need to fix some path (mock needs the exact path to do its job).
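To make that concrete, a hedged sketch under one assumption: 'surveys.abc_search_engine' is a stand-in for whichever module actually defines ABCSearchEngine and binds Server into its own namespace (e.g. via from jsonrpc_requests import Server); that module path, not jsonrpc_requests itself, is where the lookup in get_session_key() happens:
from unittest.mock import patch

# 'surveys.abc_search_engine' is hypothetical: substitute the module that
# defines ABCSearchEngine and imports Server.
@patch('surveys.abc_search_engine.Server')
def test_get_question_properties(self, mock_server):
    lime_survey = Questionnaires()  # __init__ now receives the mock, not a real connection
    # The wrapper stores the mock's return value as its server.
    self.assertIs(lime_survey.server, mock_server.return_value)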


Argparse Subparsers, and linking to Classes

We have a simple Python program to manage various types of in-house servers, using argparse:
manage_servers.py <operation> <type_of_server>
Operations are things like check, build, deploy, configure, verify etc.
Types of server are just different types of in-house servers we use.
We have a generic server class, then specific types that inherit from that:
class Server:
    def configure_logging(self, logging_file):
        ...

    def check(self):
        ...

    def deploy(self):
        ...

    def configure(self):
        ...

    def __init__(self, hostname):
        self.hostname = hostname
        logging = self.configure_logging(LOG_FILENAME)

class SpamServer(Server):
    def check(self):
        ...

class HamServer(Server):
    def deploy(self):
        ...
My question is how to link that all up to argparse?
Originally, I was using argparse subparsers for the operations (check, build, deploy) and another argument for the type.
subparsers = parser.add_subparsers(help='The operation that you want to run on the server.')
parser_check = subparsers.add_parser('check', help='Check that the server has been setup correctly.')
parser_build = subparsers.add_parser('build', help='Download and build a copy of the execution stack.')
parser_build.add_argument('-r', '--revision', help='SVN revision to build from.')
...
parser.add_argument('type_of_server', action='store', choices=types_of_servers,
                    help='The type of server you wish to create.')
Normally, you'd link each subparser to a function and then pass in the type_of_server as an argument. However, that's slightly backwards here because of the classes: I need to create an instance of the appropriate Server class, then call the operation method on that instance.
Any ideas of how I could achieve the above? Perhaps a different design pattern for Servers? Or a way to still use argparse as is?
Cheers,
Victor
Just use the dest argument of parser.add_subparsers(...) together with a mapping from type_of_server to classes:
subparsers = parser.add_subparsers(dest='operation', help='The operation that you want to run on the server.')
...
server_types = dict(spam=SpamServer, ham=HamServer)
args = parser.parse_args()
server = server_types[args.type_of_server]()
getattr(server, args.operation)(args)
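A fuller, runnable sketch of that idea, with two hedges: the hostname literal is a placeholder (Server.__init__ takes one), and the operation is called with no extra arguments, matching the method stubs in the question. Declaring type_of_server before the subparsers makes argparse consume it first on the command line:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('type_of_server', choices=['spam', 'ham'],
                    help='The type of server you wish to create.')
subparsers = parser.add_subparsers(dest='operation',
                                   help='The operation that you want to run on the server.')
subparsers.add_parser('check', help='Check that the server has been set up correctly.')
parser_build = subparsers.add_parser('build',
                                     help='Download and build a copy of the execution stack.')
parser_build.add_argument('-r', '--revision', help='SVN revision to build from.')

args = parser.parse_args()
server_types = dict(spam=SpamServer, ham=HamServer)
server = server_types[args.type_of_server]('host.example.com')  # placeholder hostname
getattr(server, args.operation)()  # e.g. server.check()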

App Engine (Python) Datastore Precall API Hooks

Background
So let's say I'm making an app for GAE, and I want to use API hooks.
BIG EDIT: In the original version of this question, I described my use case, but some folks correctly pointed out that it was not really suited for API Hooks. Granted! Consider me helped. But now my issue is academic: I still don't know how to use hooks in practice, and I'd like to. I've rewritten my question to make it much more generic.
Code
So I make a model like this:
class Model(db.Model):
    user = db.UserProperty(required=True)

    def pre_put(self):
        # Sets a value, raises an exception, whatever. Use your imagination.
        pass
And then I create a db_hooks.py:
from google.appengine.api import apiproxy_stub_map

def patch_appengine():
    def hook(service, call, request, response):
        assert service == 'datastore_v3'
        if call == 'Put':
            for entity in request.entity_list():
                entity.pre_put()

    apiproxy_stub_map.apiproxy.GetPreCallHooks().Append('preput',
                                                        hook,
                                                        'datastore_v3')
Being TDD-addled, I'm making all this using GAEUnit, so in gaeunit.py, just above the main method, I add:
import db_hooks
db_hooks.patch_appengine()
And then I write a test that instantiates and puts a Model.
Question
While patch_appengine() is definitely being called, the hook never is. What am I missing? How do I make the pre_put function actually get called?
Hooks are a little low level for the task at hand. What you probably want is a custom property class. DerivedProperty, from aetycoon, is just the ticket.
Bear in mind, however, that the 'nickname' field of the user object is probably not what you want: per the docs, it's simply the user part of the email address if they're using a Gmail account, and otherwise it's their full email address. You probably want to let users set their own nicknames instead.
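A hedged sketch of that suggestion, based on aetycoon's documented DerivedProperty decorator (Profile is an illustrative model name, not from the question):
from google.appengine.ext import db
from aetycoon import DerivedProperty

class Profile(db.Model):
    user = db.UserProperty(required=True)

    @DerivedProperty
    def nickname(self):
        # Recomputed each time the entity is put, then stored and indexed
        # like an ordinary property.
        return self.user.nickname()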
The issue here is that within the context of the hook() function, an entity is not an instance of db.Model as you are expecting.
In this context, entity is the protocol buffer class confusingly referred to as entity (entity_pb). Think of it like a JSON representation of your real entity: all the data is there, and you could build a new instance from it, but there is no reference to your memory-resident instance that is waiting for its callback.
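If you do want to stay at the hook level anyway, a hedged sketch of rehydrating the proto with db.model_from_protobuf; note it builds a detached copy, so whatever pre_put changes here does not flow back into the Put request:
from google.appengine.ext import db

def hook(service, call, request, response):
    assert service == 'datastore_v3'
    if call == 'Put':
        for entity_pb in request.entity_list():
            entity = db.model_from_protobuf(entity_pb)  # a detached copy
            entity.pre_put()  # side effects on this copy are NOT written back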
Monkey patching all of the various put/delete methods is the best way to set up model-level callbacks, as far as I know.†
Since there don't seem to be many resources on how to do this safely with the newer async calls, here's a BaseModel that implements before_put, after_put, before_delete & after_delete hooks:
class HookedModel(db.Model):
    def before_put(self):
        logging.error("before put")

    def after_put(self):
        logging.error("after put")

    def before_delete(self):
        logging.error("before delete")

    def after_delete(self):
        logging.error("after delete")

    def put(self):
        return self.put_async().get_result()

    def delete(self):
        return self.delete_async().get_result()

    def put_async(self):
        return db.put_async(self)

    def delete_async(self):
        return db.delete_async(self)
Inherit your model classes from HookedModel and override the before_xxx / after_xxx methods as required.
Place the following code somewhere that will get loaded globally in your application (like main.py if you use a pretty standard-looking layout). This is the part that calls our hooks:
def normalize_entities(entities):
    if not isinstance(entities, (list, tuple)):
        entities = (entities,)
    return [e for e in entities if hasattr(e, 'before_put')]

# monkeypatch put_async to call entity.before_put
db_put_async = db.put_async

def db_put_async_hooked(entities, **kwargs):
    ents = normalize_entities(entities)
    for entity in ents:
        entity.before_put()
    a = db_put_async(entities, **kwargs)
    get_result = a.get_result
    def get_result_with_callback():
        for entity in ents:
            entity.after_put()
        return get_result()
    a.get_result = get_result_with_callback
    return a

db.put_async = db_put_async_hooked

# monkeypatch delete_async to call entity.before_delete
db_delete_async = db.delete_async

def db_delete_async_hooked(entities, **kwargs):
    ents = normalize_entities(entities)
    for entity in ents:
        entity.before_delete()
    a = db_delete_async(entities, **kwargs)
    get_result = a.get_result
    def get_result_with_callback():
        for entity in ents:
            entity.after_delete()
        return get_result()
    a.get_result = get_result_with_callback
    return a

db.delete_async = db_delete_async_hooked
You can save or destroy your instances via model.put() or any of the db.put(), db.put_async(), etc. functions and get the desired effect.
†would love to know if there is an even better solution!?
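To round it off, a usage sketch under the patches above (Account and the email normalization are illustrative, not from the answer):
class Account(HookedModel):
    email = db.StringProperty()

    def before_put(self):
        # Illustrative hook body: normalize the address before every save.
        self.email = (self.email or '').lower()

account = Account(email='Someone@Example.COM')
account.put()                       # fires before_put, then after_put
db.put_async(account).get_result()  # same hooks via the patched module function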
I don't think that hooks are really going to solve this problem. The hooks will only run in the context of your App Engine application, but the user can change their nickname outside of your application using Google Account settings. If they do that, it won't trigger any logic implemented in your hooks.
I think that the real solution to your problem is for your application to manage its own nickname that is independent of the one exposed by the Users entity.
