I am building a straightforward Flask API. After each route decorator, I have to define a function that simply calls another function defined in a separate file. This works, but it seems redundant. I would rather register that pre-defined function directly instead of wrapping it in another function right after the decorator. Is this possible?
What I have currently:
import routes.Locations as Locations

# GET: /api/v1/locations
@app.route('/locations', methods=['GET'])
def LocationsRead():
    return Locations.read()
The Locations.read() function looks like this:
def read():
    return {
        'id': 1,
        'name': 'READ'
    }
What I am hoping to do:
import routes.Locations as Locations
# GET: /api/v1/locations
@app.route('/locations', methods=['GET'])
Locations.read()
The @ syntax of decorators is just syntactic sugar for:
def LocationsRead():
    return Locations.read()

LocationsRead = app.route('/locations', methods=['GET'])(LocationsRead)
So you could do something like:
LocationsRead = app.route('/locations', methods=['GET'])(Locations.read)
Arguably, that takes a bit longer to understand, and it's not much more terse than your original code.
With exceptions and logged stack traces, you also lose one level of the stack trace. That will make it harder to identify where and how Locations.read was added as a route in Flask. The stack trace will jump straight from the Flask library to routes.Locations:read. If you want to know how the route was configured (e.g. what the URL is parameterised with, or which methods it accepts), then you'll already have to know which file the "decoration" took place in. If you use normal decoration, you'll get a line pointing at the file containing @app.route('/locations', methods=['GET']).
That is, you get a debatable benefit and the potential to make debugging harder. Stick with the @ decorator syntax.
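To make the "syntactic sugar" equivalence concrete, here is a self-contained sketch with a toy route registry. route and routes are hypothetical stand-ins, not Flask itself, but the two registration forms behave identically:

```python
# A minimal stand-in for app.route, to show that decorator syntax and a
# plain call register the same function. Not real Flask.
routes = {}

def route(path, methods=('GET',)):
    def register(func):
        routes[(path, tuple(methods))] = func
        return func
    return register

# Form 1: normal decorator syntax.
@route('/a', methods=['GET'])
def handler_a():
    return 'a'

# Form 2: calling the decorator manually on a pre-existing function.
def read():
    return 'b'

route('/b', methods=['GET'])(read)

print(routes[('/a', ('GET',))]())  # -> a
print(routes[('/b', ('GET',))]())  # -> b
```

Both forms leave the same entry in the registry; the only difference is where the function was defined.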
Thanks to @Dunes's and @RodrigoRodrigues's answers, I played around with it more and found that the following works for endpoints both with and without arguments to pass, such as an ID. See the code below.
# GET: /api/v1/locations
app.route(basepath + '/locations', methods=['GET'])(Locations.read)
# GET: /api/v1/locations/{id}
app.route(basepath + '/locations/<int:id>', methods=['GET'])(Locations.read)
# POST: /api/v1/locations
app.route(basepath + '/locations', methods=['POST'])(Locations.create)
# PUT: /api/v1/locations/{id}
app.route(basepath + '/locations/<int:id>', methods=['PUT'])(Locations.update)
# DELETE: /api/v1/locations/{id}
app.route(basepath + '/locations/<int:id>', methods=['DELETE'])(Locations.delete)
Now, I doubt this is standard practice, but if someone is looking to reduce the amount of code in their route declarations, this is one way to do it.
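If you go this route, the repeated app.route(...)(handler) calls can also be driven from a table. In this sketch, register is a hypothetical stand-in for app.route that just records registrations, so the example runs without Flask:

```python
registered = []

def register(path, methods):
    # Hypothetical stand-in for app.route: records the registration.
    def add(func):
        registered.append((path, tuple(methods), func))
        return func
    return add

def read(id=None):
    return 'read'

def create():
    return 'create'

basepath = '/api/v1'
endpoints = [
    ('/locations',          ['GET'],  read),
    ('/locations/<int:id>', ['GET'],  read),
    ('/locations',          ['POST'], create),
]

for path, methods, handler in endpoints:
    register(basepath + path, methods)(handler)

print(len(registered))  # -> 3
```

For reference, Flask also exposes this pattern directly as app.add_url_rule(rule, endpoint, view_func, methods=...), which is what @app.route calls under the hood.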
This is my first time building out unit tests, and I'm not quite sure how to proceed here. Here's the function I'd like to test; it's a method in a class that accepts one argument, url, and returns one string, task_id:
def url_request(self, url):
    conn = self.endpoint_request()
    authorization = conn.authorization
    response = requests.get(url, authorization)
    return response["task_id"]
The method starts out by calling another method within the same class to obtain a token to connect to an API endpoint. Should I be mocking the output of that call (self.endpoint_request())?
If I do have to mock it, and my test function looks like this, how do I pass a fake token/auth endpoint_request response?
@patch("common.DataGetter.endpoint_request")
def test_url_request(mock_endpoint_request):
    mock_endpoint_request.return_value = {"Auth": "123456"}
    # How do I pass the fake token/auth to this?
    task_id = DataGetter.url_request(url)
The code you have shown is strongly dominated by interactions, which means there will most likely be no bugs for unit testing to find. The potential bugs are at the interaction level: you access conn.authorization, but is this the proper member? Does it already have the representation you need further on? Is requests.get the right method for the job? Is the argument order what you expect? Is the return value what you expect? Is task_id spelled correctly?
These are (some of) the potential bugs in your code. But with unit testing you will not be able to find them: when you replace the depended-on components with mocks (which you create or configure), your unit tests will just succeed. Let's assume you have a misconception about the return value of requests.get, namely that task_id should actually be spelled taskId. If you mock requests.get, you will implement the mock based on your own misconception; that is, your mock will return a dict with the (misspelled) key task_id. The unit test will then succeed despite the bug.
You will only find that bug with integration testing, where you bring your component and the depended-on components together. Only then can you test the assumptions made in your component against the reality of the other components.
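A minimal sketch of that trap, using unittest.mock (all names here are hypothetical, not the asker's real module):

```python
from unittest.mock import MagicMock

# Code under test: assumes the response key is "task_id".
def url_request(session, url):
    response = session.get(url)
    return response["task_id"]

# The mock is configured from the same (possibly wrong) assumption...
session = MagicMock()
session.get.return_value = {"task_id": "abc"}

# ...so the unit test passes even if the real API actually returns "taskId".
assert url_request(session, "https://example.invalid/task") == "abc"
print("unit test passed despite the potential bug")
```

The mock and the code under test share one author and one misconception, so the test cannot catch it; only talking to the real API can.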
I'm building an API that returns JSON strings. My goal, however, is to have a common wrapper around results that contain various meta data attributes about the returned results, plus the return results.
Total number of results (I don't allow a user to query for more than 1000 at a time, so they need to know if there is more so they can request the next set of results)
Throttle time (tells the user to back off for a period of time before the next request - useful if the API is busy)
Error code/Message
Data user requested
My JSON object would look something like this:
{
    'total_results': 1001,
    'throttle': 0,
    'error_cd': 0,
    'message': 'Successful',
    'results': [
        # Data that is returned; each endpoint can return a different "type"
    ]
}
The goal is to have my end points simply return the data that appears in results (not even in JSON format). My question is how can I provide a wrapper around this?
My initial idea was a decorator of some kind that runs jsonify, but can a decorator run AFTER a function? I.e., can I run the code in my route and THEN run the decorator code?
What about just writing a wrapper function? I would probably do something like:
@app.route('/api/blah/')
def my_route():
    results = calculate_my_results()
    return jsonify(format_api_result(results))

def format_api_result(data):
    # add in your extra metadata here, then return a dictionary
    return {'results': data}
A function seems to me to be the most straightforward and most flexible way to do what you want. It's a little extra code, but so's a decorator. And while you can certainly do this in a decorator, I don't think it adds much here except complexity.
If you do want to go the decorator route, check out this:
http://www.jeffknupp.com/blog/2013/11/29/improve-your-python-decorators-explained/
for a good explanation of how decorators work and how you control exactly when the wrapped function gets called.
Let me know if I misunderstood what you're trying to do.
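To answer the "can a decorator run AFTER a function?" part directly: yes. The wrapper a decorator installs can run code both before and after the wrapped function, so it can post-process the return value. A minimal sketch, with api_response and the metadata keys as hypothetical names (jsonify is omitted so the example stands alone):

```python
import functools

def api_response(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        results = func(*args, **kwargs)   # run the route body first...
        return {                          # ...then wrap it in the envelope
            'total_results': len(results),
            'error_cd': 0,
            'message': 'Successful',
            'results': results,
        }
    return wrapper

@api_response
def my_route():
    return [{'id': 1}, {'id': 2}]

print(my_route())
```

In Flask you would apply @api_response below @app.route, so the envelope is built before the response leaves the view.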
I'm learning python from a textbook. This code is for the game Tic-Tac-Toe.
The full source code for the problem:
http://pastebin.com/Tf4KQpnk
The following function confuses me:
def human_move(board, human):
    """ Get human move."""
    legal = legal_moves(board)
    move = None
    while move not in legal:
        move = ask_number("Where will you move? (0 - 8): ", 0, NUM_SQUARES)
        if move not in legal:
            print "\nThat square is already taken. Choose another.\n"
    print "Fine..."
    return move
I do not know why the function receives 'human' parameter. It appears to do nothing with it.
def human_move(board, human):
How would I know to send 'human' to this function if I were to write this game from scratch? Because I can't see why it is sent to this function if it isn't used or returned.
The answer: it depends. In your example it seems useless to me, but I haven't checked it in depth.
If you create a function to be used only from your code, it is in fact useless.
def calculate_money(bank_name, my_dog_name):
    return Bank(bank_name).money

money = calculate_money('Deutsche bank', 'Ralph')
But if you are working with some kind of API/Contract, the callbacks you specify might accept arguments that are not needed for a certain implementation, but for some others, are necessary.
For instance, imagine that the following function is used in some kind of framework, and you want the framework to show a pop up when the operation is finished. It could look something like this:
def my_cool_callback(names, accounts, context):
    # do something blablabla
    context.show_message('operation finished')
But what if you don't really need the context object in your callback? You have to specify it anyway for the signatures to match. You can't call the parameter pointless, because it is used by some implementations.
EDIT
Another situation in which it could be useful is looping through a list of functions that have almost the same signature. In that case it can be fine to have extra arguments as placeholders. Say all your functions take three arguments in general, but one of them needs only two.
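A sketch of that uniform-signature case: every callback takes (name, account, context) so they can all be called from one loop, even though one of them never uses context. All names here are made up for illustration:

```python
def send_report(name, account, context):
    return 'report for %s via %s' % (name, context)

def log_visit(name, account, context):
    # context is unused here, but kept so the signature matches the contract
    return 'visit by %s (%s)' % (name, account)

callbacks = [send_report, log_visit]

# One loop can drive every callback because the signatures agree.
for cb in callbacks:
    print(cb('Alice', 'acct-1', 'email'))
# -> report for Alice via email
# -> visit by Alice (acct-1)
```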
OK, I know it's going to be obvious, but I cannot work out how to write a test for an internal function. Here's a trivial piece of code to illustrate the problem.
import unittest

def high(x, y):
    def low(x):
        return x*2
    return y*low(x)

class TestHigh(unittest.TestCase):
    def test_high(self):
        self.assertEqual(high(1, 2), 4)

    def test_low(self):
        self.assertEqual(low(3), 6)
results in
Exception: NameError: global name 'low' is not defined
In the "real" case I want to be able to test the lower level function in isolation to make sure all the paths are exercised, which is cumbersome when testing only from the higher level.
low is nested inside the high function, so it's not accessible from outside it. The equivalent call through the public interface would be high(3, 1), since high(3, 1) == 1 * low(3).
You write tests to ensure that the publicly visible interface performs according to its specification. You should not attempt to write tests for internal functionality that is not exposed.
If you cannot fully test low() through the results of high() then the untested parts of low() cannot matter to anything outside.
BAD: Try making a class and adding the functions as methods (or static methods) to it.
(I'll leave this here as a reference for what NOT to do.)
GOOD: Write module level functions or accept that you can't test it if you nest it.
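A sketch of the module-level option, following the names in the question: hoist low out of high so both can be imported and tested directly.

```python
import unittest

def low(x):
    # formerly nested inside high; now testable on its own
    return x * 2

def high(x, y):
    return y * low(x)

class TestBoth(unittest.TestCase):
    def test_high(self):
        self.assertEqual(high(1, 2), 4)

    def test_low(self):
        self.assertEqual(low(3), 6)
```

Run with python -m unittest; both code paths are now exercised directly, with no NameError.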
I want to make sure I understand how to create tasklets and asynchronous methods. What I have is a method that returns a list. I want it to be called from somewhere and immediately allow other calls to be made. So I have this:
future_1 = get_updates_for_user(userKey, aDate)
future_2 = get_updates_for_user(anotherUserKey, aDate)
somelist.extend(future_1)
somelist.extend(future_2)
....
@ndb.tasklet
def get_updates_for_user(userKey, lastSyncDate):
    noteQuery = ndb.GqlQuery('SELECT * FROM Comments WHERE ANCESTOR IS :1 AND modifiedDate > :2', userKey, lastSyncDate)
    note_list = list()
    qit = noteQuery.iter()
    while (yield qit.has_next_async()):
        note = qit.next()
        noteDic = note.to_dict()
        note_list.append(noteDic)
    raise ndb.Return(note_list)
Is this code doing what I'd expect it to do? Namely, will the two calls run asynchronously? Am I using futures correctly?
Edit: Well after testing, the code does produce the desired results. I'm a newbie to Python - what are some ways to test to see if the methods are running async?
It's pretty hard to verify for yourself that the methods are running concurrently -- you'd have to put copious logging in. Also in the dev appserver it'll be even harder as it doesn't really run RPCs in parallel.
Your code looks okay, it uses yield in the right place.
My only recommendation is to name your function get_updates_for_user_async() -- that matches the convention NDB itself uses and is a hint to the reader of your code that the function returns a Future and should be yielded to get the actual result.
An alternative way to do this is to use the map_async() method on the Query object; it would let you write a callback that just contains the to_dict() call:
@ndb.tasklet
def get_updates_for_user_async(userKey, lastSyncDate):
    noteQuery = ndb.gql('...')
    note_list = yield noteQuery.map_async(lambda note: note.to_dict())
    raise ndb.Return(note_list)
Advanced tip: you can simplify this even more by dropping the @ndb.tasklet decorator and just returning the Future returned by map_async():
def get_updates_for_user_async(userKey, lastSyncDate):
    noteQuery = ndb.gql('...')
    return noteQuery.map_async(lambda note: note.to_dict())
This is a general slight optimization for async functions that contain only one yield and immediately return the yielded value. (If you don't immediately get this, you're in good company, and it runs the risk of being broken by a future maintainer who doesn't either. :-)
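NDB is hard to experiment with outside App Engine, but the same one-yield shape exists in modern asyncio, so here is an analogue of the optimization (fetch_doubled is a hypothetical stand-in for an async RPC such as map_async()):

```python
import asyncio

async def fetch_doubled(x):
    # Stand-in for an async RPC: yields to the event loop, then returns.
    await asyncio.sleep(0)
    return x * 2

# Version with an extra coroutine layer: await once, return immediately.
async def get_doubled(x):
    result = await fetch_doubled(x)
    return result

# Optimized version: skip the extra layer and hand back the awaitable.
def get_doubled_direct(x):
    return fetch_doubled(x)

async def main():
    a = await get_doubled(3)
    b = await get_doubled_direct(3)
    return a, b

print(asyncio.run(main()))  # -> (6, 6)
```

Callers await either version the same way; the direct version just avoids creating a coroutine whose only job is to forward a single result.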