Logging extra information in asynchronous programming - python

Although the sample code below is written in Python, the problem is not really Python-specific.
I need to attach extra per-request information, such as a client ID, to every single log entry, including entries emitted by the ORM, without modifying every library and application we use.
With synchronous code, as in the example below, this is easy: we create a shared extra dict once and update it once per request. But what can we do in asynchronous code, where concurrent requests make this technique useless?
Python is used here only because it is the easiest way to illustrate the problem; the question itself is language-independent.
Logger configuration file:
# logging_config.py
extra = {
    'application': None,
    'client_id': None
}
# use custom logstash formatter to send this extra
Our main application file:
#!/usr/bin/env python3
import logging
import our_lib.logging_config as log_cfg
log = logging.getLogger(__name__)
def application_setup():
    """ Sets up application and logging. """
    log_cfg.extra['application'] = 'OurWebApplication'
    ...  # do the rest here
Our request processor:
# api_handler.py
import logging
import our_lib.logging_config as log_cfg

log = logging.getLogger(__name__)

...  # rest of the file

def extract_client_id(request):
    """ Extracts Client ID from request or session. """
    ...

# some web API request handler
@route('/api/do_smth')
def api_do_smth_handler(request):
    """ Handles `do_smth` web request. """
    log_cfg.extra['client_id'] = extract_client_id(request)
    ...
Some library module:
import logging
log = logging.getLogger(__name__)
def process_data(*args, **kwargs):
    """ Processes arguments. """
    try:
        ...  # process arguments
    except MyException:
        log.exception("argument processing failed")  # <-- how to log client ID here?
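For reference, here is a minimal sketch of how the shared extra dict can reach every record in the synchronous case; the logging.Filter shown is an illustrative assumption, not the actual logstash formatter:

import logging
import our_lib.logging_config as log_cfg  # the module shown above

class ContextFilter(logging.Filter):
    """ Copies the shared extra dict onto every record passing through the handler. """
    def filter(self, record):
        for key, value in log_cfg.extra.items():
            setattr(record, key, value)
        return True

handler = logging.StreamHandler()        # stand-in for the real logstash handler
handler.addFilter(ContextFilter())
logging.getLogger().addHandler(handler)  # done once, e.g. in application_setup()

This works as long as one request is handled at a time; with interleaved coroutines the single shared dict is exactly what breaks down.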

Related

Issue in sending python logs to Splunk using splunk_hec_handler

I am using the Python logging library to push logs to Splunk. This package uses the HEC method to push logs to Splunk.
The issue I am facing is that, out of the many logger statements in my application, I want only a select few to go to Splunk, not all of them.
So I created the method below, which converts string logs into JSON (key/value) and pushes them to Splunk, and I call it just after each logger statement I wish to push to Splunk. But all the other logger statements, which I don't want to send to Splunk, are also getting pushed there.
Why is this happening?
class Test:
    def __init__(self):
        self.logger = logging.getLogger('myapp')

    def method_test(self, url, data, headers):
        response = requests.post(url=url, data=json.dumps(data), headers=abc.headers)
        ## i dont want to push this below log message to splunk but this is also getting pushed to splunk
        self.logger.debug(f"request code:{response.request.url} request body:{response.request.body}")
        ## I wish to send this below log to splunk
        self.logger.debug(f"response code:{response.status_code} response body:{response.text}")
        log_dic = {'response_code': response.status_code, 'response_body': response.text}
        splunklogger = self.logging_override(log_dic, self.splunk_host,
                                             self.index_token, self.port,
                                             self.proto, self.ssl_verify,
                                             self.source)
        splunklogger.info(log_dic)
        return response

    def logging_override(log_dict: dict, splunk_host, index_token, splunk_port, splunk_proto, ssl_ver, source_splnk):
        """
        This function helps in logging custom fields in JSON key/value form by defining fields of our choice in the log_dict dictionary,
        and pushes logs to the Splunk server.
        """
        splunklogger = logging.getLogger()
        splunklogger.setLevel(logging.INFO)
        stream_handler = logging.StreamHandler()
        basic_dict = {"time": "%(asctime)s", "level": "%(levelname)s"}
        full_dict = {**basic_dict, **log_dict}
        stream_formatter = logging.Formatter(json.dumps(full_dict))
        stream_handler.setFormatter(stream_formatter)
        if not splunklogger.handlers:
            splunklogger.addHandler(stream_handler)
        splunklogger.handlers[0] = stream_handler
        splunk_handler = SplunkHecHandler(splunk_host,
                                          index_token,
                                          port=splunk_port, proto=splunk_proto, ssl_verify=ssl_ver,
                                          source=source_splnk)
        splunklogger.addHandler(splunk_handler)
        splunklogger.addHandler(splunk_handler)
        return splunklogger
I believe that the problem is with your calls to logging.getLogger: when you configure your app logger you specify a logger name, but when you configure the Splunk logger you don't specify one, and therefore you are getting, configuring, and attaching the SplunkHecHandler to the root logger.
As events come in to the lower-level loggers, by default they propagate up to higher-level loggers (e.g. the root logger) and thus get emitted to Splunk.
I suspect an easy solution would be to look at your logger names: possibly put the Splunk logger at a lower level than your component, or look into the propagation of loggers. The logging documentation talks a bit about logger objects and their propagation.
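A minimal sketch of that suggestion (the logger names and connection arguments are placeholders): give the Splunk handler its own named logger and disable propagation on it, so only records logged explicitly through that logger reach Splunk.

import logging
from splunk_hec_handler import SplunkHecHandler

app_logger = logging.getLogger('myapp')            # ordinary application logging

splunk_logger = logging.getLogger('myapp.splunk')  # dedicated logger for Splunk-bound events
splunk_logger.setLevel(logging.INFO)
splunk_logger.propagate = False                    # keep these records off parent/root handlers
splunk_logger.addHandler(SplunkHecHandler('splunk-host', 'hec-token'))  # placeholder arguments

app_logger.debug("handled by your normal handlers only, never sent to Splunk")
splunk_logger.info({'response_code': 200})         # the only records that reach Splunk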

Google Cloud Logging malfunctioning from cloud function

I am currently trying to deploy a Cloud Function triggered by Pub/Sub and written in Python. Previously we used loguru to log. I am now making the switch to Cloud Logging. I thought it would be rather simple, but I am quite puzzled. Here is the code I deployed in a Cloud Function, just to try logging:
import base64
import logging
import google.cloud.logging as google_logging
def hello_pubsub(event, context):
    client = google_logging.Client()
    client.setup_logging()
    logging.debug("Starting function")
    logging.info("Hello")
    logging.warning("warning ! ")
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    logging.info(pubsub_message)
    logging.error("Exit function")
I followed the documentation I could find on the subject (but the pages show various methods and are not very clear). Here is the result in the Logging interface:
This is the result in the "Global" logs. Two questions here: why are the debug logs not shown, even though I explicitly set the log level to "debug" in the interface? And why are the logs shown 1, 2 or 3 times, randomly?
Now I try to display the logs for my Cloud Function only:
This is getting worse: now the logs are displayed up to 5 times (and not even the same number of times as in the "Global" tab), and the levels are all wrong (logging.info results in 1 info line and 1 error line; error and warning result in 2 error lines...).
I imagine I must be doing something wrong, but I can't see what, as what I am trying to do is fairly simple. Can somebody please help me? Thanks!
EDIT: I made the mistake of putting the initialization of the client inside the function, which explains why the logs were displayed more than once. One problem is left: the warnings are displayed as errors in the "Cloud Function" tab, but correctly in the "Global" tab. Does someone have an idea about this?
Try moving your setup outside of the function:
import base64
import logging
import google.cloud.logging as google_logging
client = google_logging.Client()
client.setup_logging()
def hello_pubsub(event, context):
    logging.debug("Starting function")
    logging.info("Hello")
    logging.warning("warning ! ")
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    logging.info(pubsub_message)
    logging.error("Exit function")
As-is, you're adding new handlers on every request per instance.
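If moving the client setup to module scope is not an option, a rough alternative (a sketch, not part of the original answer) is to guard the setup so it runs at most once per instance:

import logging
import google.cloud.logging as google_logging

_cloud_logging_ready = False  # module-level flag, reused across requests on the same instance

def ensure_cloud_logging():
    """ Attach the Cloud Logging handler only on the first call. """
    global _cloud_logging_ready
    if not _cloud_logging_ready:
        client = google_logging.Client()
        client.setup_logging()
        _cloud_logging_ready = True

def hello_pubsub(event, context):
    ensure_cloud_logging()
    logging.info("Hello")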
You should use the integration with the Python logging module:
import logging
import base64
import google.cloud.logging  # Don't conflict with standard logging
from google.cloud.logging.handlers import CloudLoggingHandler

client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client)
cloud_logger = logging.getLogger('cloudLogger')
cloud_logger.setLevel(logging.INFO)  # defaults to WARN
cloud_logger.addHandler(handler)

def hello_pubsub(event, context):
    cloud_logger.debug("Starting function")
    cloud_logger.info("Hello")
    cloud_logger.warning("warning ! ")
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    cloud_logger.info(pubsub_message)
    cloud_logger.error("Exit function")
    return 'OK', 200

Disable logging for a python module in one context but not another

I have some code that uses the requests module to communicate with a logging API. However, requests itself, through urllib3, does logging. Naturally, I need to disable logging so that requests to the logging API don't cause an infinite loop of logs. So, in the module I do the logging calls in, I do logging.getLogger("requests").setLevel(logging.CRITICAL) to mute routine request logs.
However, this code is intended to load and run arbitrary user code. Since the python logging module apparently uses global state to manage settings for a given logger, I am worried the user's code might turn logging back on and cause problems, for instance if they naively use the requests module in their code without realizing I have disabled logging for it for a reason.
How can I disable logging for the requests module when it is executed from the context of my code, but not affect the state of the logger for the module from the perspective of the user? Some sort of context manager that silences calls to logging for code within the manager would be ideal. Being able to load the requests module with a unique __name__ so the logger uses a different name could also work, though it's a bit convoluted. I can't find a way to do either of these things, though.
Regrettably, the solution will need to handle multiple threads, so procedurally turning off logging, then running the API call, then turning it back on will not work as global state is mutated.
I think I've got a solution for you:
The logging module is built to be thread-safe:
The logging module is intended to be thread-safe without any special work needing to be done by its clients. It achieves this through using threading locks; there is one lock to serialize access to the module's shared data, and each handler also creates a lock to serialize access to its underlying I/O.
Fortunately, it exposes the second lock mentioned through a public API: Handler.acquire() lets you acquire a lock for a particular log handler (and Handler.release() releases it again). Acquiring that lock will block all other threads that try to log a record that would be handled by this handler until the lock is released.
This allows you to manipulate the handler's state in a thread-safe way. The caveat is this: Because it's intended as a lock around the I/O operations of the handler, the lock will only be acquired in emit(). So only once a record makes it through filters and log levels and would be emitted by a particular handler will the lock be acquired. That's why I had to subclass a handler and create the SilencableHandler.
So the idea is this:
Get the topmost logger for the requests module and stop propagation for it
Create your custom SilencableHandler and add it to the requests logger
Use the Silenced context manager to selectively silence the SilencableHandler
main.py
from Queue import Queue
from threading import Thread
from usercode import fetch_url
import logging
import requests
import time
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)
class SilencableHandler(logging.StreamHandler):

    def __init__(self, *args, **kwargs):
        self.silenced = False
        return super(SilencableHandler, self).__init__(*args, **kwargs)

    def emit(self, record):
        if not self.silenced:
            super(SilencableHandler, self).emit(record)

requests_logger = logging.getLogger('requests')
requests_logger.propagate = False
requests_handler = SilencableHandler()
requests_logger.addHandler(requests_handler)

class Silenced(object):

    def __init__(self, handler):
        self.handler = handler

    def __enter__(self):
        log.info("Silencing requests logger...")
        self.handler.acquire()
        self.handler.silenced = True
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.handler.silenced = False
        self.handler.release()
        log.info("Requests logger unsilenced.")

NUM_THREADS = 2
queue = Queue()
URLS = [
    'http://www.stackoverflow.com',
    'http://www.stackexchange.com',
    'http://www.serverfault.com',
    'http://www.superuser.com',
    'http://travel.stackexchange.com',
]

for i in range(NUM_THREADS):
    worker = Thread(target=fetch_url, args=(i, queue,))
    worker.setDaemon(True)
    worker.start()

for url in URLS:
    queue.put(url)

log.info('Starting long API request...')

with Silenced(requests_handler):
    time.sleep(5)
    requests.get('http://www.example.org/api')
    time.sleep(5)

log.info('Done with long API request.')

queue.join()
usercode.py
import logging
import requests
import time
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)
def fetch_url(i, q):
    while True:
        url = q.get()
        response = requests.get(url)
        logging.info("{}: {}".format(response.status_code, url))
        time.sleep(i + 2)
        q.task_done()
Example output:
(Notice how the call to http://www.example.org/api isn't logged, and all threads that try to log requests are blocked for the first 10 seconds).
INFO:__main__:Starting long API request...
INFO:__main__:Silencing requests logger...
INFO:__main__:Requests logger unsilenced.
INFO:__main__:Done with long API request.
Starting new HTTP connection (1): www.stackoverflow.com
Starting new HTTP connection (1): www.stackexchange.com
Starting new HTTP connection (1): stackexchange.com
Starting new HTTP connection (1): stackoverflow.com
INFO:root:200: http://www.stackexchange.com
INFO:root:200: http://www.stackoverflow.com
Starting new HTTP connection (1): www.serverfault.com
Starting new HTTP connection (1): serverfault.com
INFO:root:200: http://www.serverfault.com
Starting new HTTP connection (1): www.superuser.com
Starting new HTTP connection (1): superuser.com
INFO:root:200: http://www.superuser.com
Starting new HTTP connection (1): travel.stackexchange.com
INFO:root:200: http://travel.stackexchange.com
Threading code is based on Doug Hellmann's articles on threading and queues.

Using pytest with gaesessions session middleware in appengine

When I run py.test --with-gae, I get the following error (I have pytest_gae plugin installed):
    def get_current_session():
        """Returns the session associated with the current request."""
>       return _tls.current_session
E       AttributeError: 'thread._local' object has no attribute 'current_session'

gaesessions/__init__.py:50: AttributeError
I'm using pytest to test my google appengine application. The application runs fine when run in the localhost SDK or when deployed to GAE servers. I just can't figure out how to make pytest work with gaesessions.
My code is below:
test_handlers.py
from webtest import TestApp
import appengine_config
def pytest_funcarg__anon_user(request):
    from main import app
    app = appengine_config.webapp_add_wsgi_middleware(app)
    return TestApp(app)

def test_session(anon_user):
    from gaesessions import get_current_session
    assert get_current_session()
appengine_config.py
from gaesessions import SessionMiddleware
def webapp_add_wsgi_middleware(app):
    from google.appengine.ext.appstats import recording
    app = recording.appstats_wsgi_middleware(app)
    app = SessionMiddleware(app, cookie_key="replaced-with-this-boring-text")
    return app
Relevant code from gaesessions:
# ... more code not shown here ...

_tls = threading.local()

def get_current_session():
    """Returns the session associated with the current request."""
    return _tls.current_session

# ... more code not shown here ...

class SessionMiddleware(object):
    """WSGI middleware that adds session support.

    ``cookie_key`` - A key used to secure cookies so users cannot modify their
    content. Keys should be at least 32 bytes (RFC2104). Tip: generate your
    key using ``os.urandom(64)`` but do this OFFLINE and copy/paste the output
    into a string which you pass in as ``cookie_key``. If you use ``os.urandom()``
    to dynamically generate your key at runtime then any existing sessions will
    become junk every time your app starts up!

    ``lifetime`` - ``datetime.timedelta`` that specifies how long a session may last. Defaults to 7 days.

    ``no_datastore`` - By default all writes also go to the datastore in case
    memcache is lost. Set to True to never use the datastore. This improves
    write performance but sessions may be occasionally lost.

    ``cookie_only_threshold`` - A size in bytes. If session data is less than this
    threshold, then session data is kept only in a secure cookie. This avoids
    memcache/datastore latency which is critical for small sessions. Larger
    sessions are kept in memcache+datastore instead. Defaults to 10KB.
    """

    def __init__(self, app, cookie_key, lifetime=DEFAULT_LIFETIME, no_datastore=False, cookie_only_threshold=DEFAULT_COOKIE_ONLY_THRESH):
        self.app = app
        self.lifetime = lifetime
        self.no_datastore = no_datastore
        self.cookie_only_thresh = cookie_only_threshold
        self.cookie_key = cookie_key
        if not self.cookie_key:
            raise ValueError("cookie_key MUST be specified")
        if len(self.cookie_key) < 32:
            raise ValueError("RFC2104 recommends you use at least a 32 character key. Try os.urandom(64) to make a key.")

    def __call__(self, environ, start_response):
        # initialize a session for the current user
        _tls.current_session = Session(lifetime=self.lifetime, no_datastore=self.no_datastore, cookie_only_threshold=self.cookie_only_thresh, cookie_key=self.cookie_key)

        # create a hook for us to insert a cookie into the response headers
        def my_start_response(status, headers, exc_info=None):
            _tls.current_session.save()  # store the session if it was changed
            for ch in _tls.current_session.make_cookie_headers():
                headers.append(('Set-Cookie', ch))
            return start_response(status, headers, exc_info)

        # let the app do its thing
        return self.app(environ, my_start_response)
The problem is that your gaesessions middleware is not called until the app itself is called, and the app is only called when you make a request to it. Try issuing a request before you check for the session value. Check out the revised test_handlers.py code below.
def test_session(anon_user):
    anon_user.get("/")  # get any url to call the app to create a session.
    from gaesessions import get_current_session
    assert get_current_session()

How to unit test Google Cloud Endpoints

I need some help setting up unit tests for Google Cloud Endpoints. Using WebTest, all requests answer with AppError: Bad response: 404 Not Found. I'm not really sure if endpoints is compatible with WebTest.
This is how the application is generated:
application = endpoints.api_server([TestEndpoint], restricted=False)
Then I use WebTest this way:
client = webtest.TestApp(application)
client.post('/_ah/api/test/v1/test', params)
Testing with curl works fine.
Should I write tests for endpoints differently? What is the suggestion from the GAE Endpoints team?
After much experimenting and looking at the SDK code I've come up with two ways to test endpoints within python:
1. Using webtest + testbed to test the SPI side
You are on the right track with webtest, but just need to make sure you correctly transform your requests for the SPI endpoint.
The Cloud Endpoints API front-end and the EndpointsDispatcher in dev_appserver transform calls to /_ah/api/* into corresponding "backend" calls to /_ah/spi/*. The transformation seems to be:
All calls are application/json HTTP POSTs (even if the REST endpoint is something else).
The request parameters (path, query and JSON body) are all merged together into a single JSON body message.
The "backend" endpoint uses the actual python class and method names in the URL, e.g. POST /_ah/spi/TestEndpoint.insert_message will call TestEndpoint.insert_message() in your code.
The JSON response is only reformatted before being returned to the original client.
This means you can test the endpoint with the following setup:
from google.appengine.ext import testbed
import webtest
# ...
def setUp(self):
    tb = testbed.Testbed()
    tb.setup_env(current_version_id='testbed.version')  # needed because endpoints expects a . in this value
    tb.activate()
    tb.init_all_stubs()
    self.testbed = tb

def tearDown(self):
    self.testbed.deactivate()

def test_endpoint_insert(self):
    app = endpoints.api_server([TestEndpoint], restricted=False)
    testapp = webtest.TestApp(app)
    msg = {...}  # a dict representing the message object expected by insert,
                 # to be serialised to JSON by webtest
    resp = testapp.post_json('/_ah/spi/TestEndpoint.insert', msg)
    self.assertEqual(resp.json, {'expected': 'json response msg as dict'})
The thing here is you can easily set up appropriate fixtures in the datastore or other GAE services prior to calling the endpoint, thus you can more fully assert the expected side effects of the call.
2. Starting the development server for full integration test
You can start the dev server within the same python environment using something like the following:
import sys
import os
import dev_appserver
sys.path[1:1] = dev_appserver._DEVAPPSERVER2_PATHS
from google.appengine.tools.devappserver2 import devappserver2
from google.appengine.tools.devappserver2 import python_runtime
# ...
def setUp(self):
    APP_CONFIGS = ['/path/to/app.yaml']
    python_runtime._RUNTIME_ARGS = [
        sys.executable,
        os.path.join(os.path.dirname(dev_appserver.__file__),
                     '_python_runtime.py')
    ]
    options = devappserver2.PARSER.parse_args([
        '--admin_port', '0',
        '--port', '8123',
        '--datastore_path', ':memory:',
        '--logs_path', ':memory:',
        '--skip_sdk_update_check',
        '--',
    ] + APP_CONFIGS)
    server = devappserver2.DevelopmentServer()
    server.start(options)
    self.server = server

def tearDown(self):
    self.server.stop()
Now you need to issue actual HTTP requests to localhost:8123 to run tests against the API, but again can interact with GAE APIs to set up fixtures, etc. This is obviously slow as you're creating and destroying a new dev server for every test run.
At this point I use the Google API Python client to consume the API instead of building the HTTP requests myself:
import apiclient.discovery
# ...
def test_something(self):
    apiurl = 'http://%s/_ah/api/discovery/v1/apis/{api}/{apiVersion}/rest' \
        % self.server.module_to_address('default')
    service = apiclient.discovery.build('testendpoint', 'v1', apiurl)
    res = service.testresource().insert({... message ...}).execute()
    self.assertEquals(res, {... expected response as dict ...})
This is an improvement over testing with CURL as it gives you direct access to the GAE APIs to easily set up fixtures and inspect internal state. I suspect there is an even better way to do integration testing that bypasses HTTP by stitching together the minimal components in the dev server that implement the endpoint dispatch mechanism, but that requires more research time than I have right now.
The webtest approach can be simplified to reduce naming bugs. For the following TestApi:
import endpoints
import protorpc
import logging

class ResponseMessageClass(protorpc.messages.Message):
    message = protorpc.messages.StringField(1)

class RequestMessageClass(protorpc.messages.Message):
    message = protorpc.messages.StringField(1)

@endpoints.api(name='testApi', version='v1',
               description='Test API',
               allowed_client_ids=[endpoints.API_EXPLORER_CLIENT_ID])
class TestApi(protorpc.remote.Service):

    @endpoints.method(RequestMessageClass,
                      ResponseMessageClass,
                      name='test',
                      path='test',
                      http_method='POST')
    def test(self, request):
        logging.info(request.message)
        return ResponseMessageClass(message="response message")
The tests.py should look like this:
import webtest
import logging
import unittest

from google.appengine.ext import testbed
from protorpc.remote import protojson
import endpoints

from api.test_api import TestApi, RequestMessageClass, ResponseMessageClass

class AppTest(unittest.TestCase):

    def setUp(self):
        logging.getLogger().setLevel(logging.DEBUG)
        tb = testbed.Testbed()
        tb.setup_env(current_version_id='testbed.version')
        tb.activate()
        tb.init_all_stubs()
        self.testbed = tb

    def tearDown(self):
        self.testbed.deactivate()

    def test_endpoint_testApi(self):
        application = endpoints.api_server([TestApi], restricted=False)
        testapp = webtest.TestApp(application)
        req = RequestMessageClass(message="request message")
        response = testapp.post('/_ah/spi/' + TestApi.__name__ + '.' + TestApi.test.__name__,
                                protojson.encode_message(req),
                                content_type='application/json')
        res = protojson.decode_message(ResponseMessageClass, response.body)
        self.assertEqual(res.message, 'response message')

if __name__ == '__main__':
    unittest.main()
I tried everything I could think of to allow these to be tested in the normal way. I tried hitting the /_ah/spi methods directly, and even tried creating a new protorpc app using service_mappings, to no avail. I'm not a Googler on the endpoints team, so maybe they have something clever to allow this to work, but it doesn't appear that simply using webtest will work (unless I missed something obvious).
In the meantime you can write a test script that starts the app engine test server with an isolated environment and just issue http requests to it.
Example to run the server with an isolated environment (bash but you can easily run this from python):
DATA_PATH=/tmp/appengine_data
if [ ! -d "$DATA_PATH" ]; then
    mkdir -p $DATA_PATH
fi
dev_appserver.py --storage_path=$DATA_PATH/storage --blobstore_path=$DATA_PATH/blobstore --datastore_path=$DATA_PATH/datastore --search_indexes_path=$DATA_PATH/searchindexes --show_mail_body=yes --clear_search_indexes --clear_datastore .
You can then just use requests to test ala curl:
requests.get('http://localhost:8080/_ah/...')
If you don't want to test the full HTTP stack as described by Ezequiel Muns, you can also just mock out endpoints.method and test your API definition directly:
import unittest

def null_decorator(*args, **kwargs):
    def decorator(method):
        def wrapper(*args, **kwargs):
            return method(*args, **kwargs)
        return wrapper
    return decorator

from google.appengine.api.users import User
import endpoints

endpoints.method = null_decorator
# the decorator needs to be mocked out before you load your endpoint api definitions
from mymodule import api

class FooTest(unittest.TestCase):
    def setUp(self):
        self.api = api.FooService()

    def test_bar(self):
        # pass protorpc messages directly
        self.api.foo_bar(api.MyRequestMessage(some='field'))
My solution uses one dev_appserver instance for the entire test module, which is faster than restarting the dev_appserver for each test method.
By using Google's Python API client library, I also get the simplest and at the same time most powerful way of interacting with my API.
import unittest
import sys
import os

from apiclient.discovery import build

import dev_appserver
sys.path[1:1] = dev_appserver.EXTRA_PATHS

from google.appengine.tools.devappserver2 import devappserver2
from google.appengine.tools.devappserver2 import python_runtime

server = None

def setUpModule():
    # starting a dev_appserver instance for testing
    path_to_app_yaml = os.path.normpath('path_to_app_yaml')
    app_configs = [path_to_app_yaml]
    python_runtime._RUNTIME_ARGS = [
        sys.executable,
        os.path.join(os.path.dirname(dev_appserver.__file__),
                     '_python_runtime.py')
    ]
    options = devappserver2.PARSER.parse_args(['--port', '8080',
                                               '--datastore_path', ':memory:',
                                               '--logs_path', ':memory:',
                                               '--skip_sdk_update_check',
                                               '--',
                                               ] + app_configs)
    global server
    server = devappserver2.DevelopmentServer()
    server.start(options)

def tearDownModule():
    # shutting down dev_appserver instance after testing
    server.stop()

class MyTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # build a service object for interacting with the api
        # dev_appserver must be running and listening on port 8080
        api_root = 'http://127.0.0.1:8080/_ah/api'
        api = 'my_api'
        version = 'v0.1'
        discovery_url = '%s/discovery/v1/apis/%s/%s/rest' % (api_root, api,
                                                             version)
        cls.service = build(api, version, discoveryServiceUrl=discovery_url)

    def setUp(self):
        # create a parent entity and store its key for each test run
        body = {'name': 'test parent'}
        response = self.service.parent().post(body=body).execute()
        self.parent_key = response['parent_key']

    def test_post(self):
        # test my post method
        # the tested method also requires a path argument "parent_key"
        # .../_ah/api/my_api/sub_api/post/{parent_key}
        body = {'SomeProjectEntity': {'SomeId': 'abcdefgh'}}
        parent_key = self.parent_key
        req = self.service.sub_api().post(body=body, parent_key=parent_key)
        response = req.execute()
etc..
After digging through the sources, I believe things have changed in endpoints since Ezequiel Muns's (excellent) answer in 2014. For method 1 you now need to request from /_ah/api/* directly and use the correct HTTP method instead of using the /_ah/spi/* transformation. This makes the test file look like this:
from google.appengine.ext import testbed
import webtest
# ...
def setUp(self):
    tb = testbed.Testbed()
    # setting current_version_id doesn't seem necessary anymore
    tb.activate()
    tb.init_all_stubs()
    self.testbed = tb

def tearDown(self):
    self.testbed.deactivate()

def test_endpoint_insert(self):
    app = endpoints.api_server([TestEndpoint])  # restricted is no longer required
    testapp = webtest.TestApp(app)
    msg = {...}  # a dict representing the message object expected by insert,
                 # to be serialised to JSON by webtest
    resp = testapp.post_json('/_ah/api/test/v1/insert', msg)
    self.assertEqual(resp.json, {'expected': 'json response msg as dict'})
For searching's sake, the symptom of using the old method is endpoints raising a ValueError with Invalid request path: /_ah/spi/whatever. Hope that saves someone some time!
