I've been working on an AppEngine-based project and I wanted to know if it's possible to ignore a ProtoRPC message field.
With the Java SDK, you can use @ApiResourceProperty to ignore a property (meaning it's not contained in the response returned to the browser). However, I have not come across a way of doing this with the Python SDK.
Is there anything like this in the Python SDK?
Thanks, Adil
Nope, unfortunately not (at least not to my knowledge).
Two possible solutions, depending on your use case (sketches of both follow below).

Set the field values to None before returning the message from your method. That way they will be skipped and not included in the JSON response.

If your messages are hooked up to datastore models, you can use the endpoints-proto-datastore library, which allows you to use your ndb models directly in your API methods. It also allows for request_fields and response_fields parameters in the method decorator, which limit the request or response to the specified subset of message/model fields (internally it creates the necessary message classes for you).
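A minimal sketch of the first option; the GreetingMessage, its secret field and the values are made up for illustration, and resetting a field to None keeps it out of the serialized JSON:

    import endpoints
    from protorpc import message_types, messages, remote

    class GreetingMessage(messages.Message):
        text = messages.StringField(1)
        secret = messages.StringField(2)  # internal only, should never reach the client

    @endpoints.api(name='greetings', version='v1')
    class GreetingsApi(remote.Service):

        @endpoints.method(message_types.VoidMessage, GreetingMessage,
                          path='greeting', http_method='GET', name='greeting.get')
        def get_greeting(self, request):
            # Pretend this message was populated from a datastore entity.
            msg = GreetingMessage(text='hello', secret='internal token')
            msg.secret = None  # unset fields are skipped when the response is serialized
            return msg

And a sketch of the second option, assuming a hypothetical Greeting ndb model; response_fields in the endpoints-proto-datastore method decorator trims the response down to the listed fields:

    import endpoints
    from google.appengine.ext import ndb
    from protorpc import remote
    from endpoints_proto_datastore.ndb import EndpointsModel

    class Greeting(EndpointsModel):
        text = ndb.StringProperty()
        secret = ndb.StringProperty()  # stored in the datastore, kept out of API responses

    @endpoints.api(name='greetings', version='v1')
    class GreetingsApi(remote.Service):

        @Greeting.method(response_fields=('text',),
                         path='greetings', http_method='POST', name='greeting.insert')
        def insert_greeting(self, greeting):
            greeting.put()
            return greeting  # only "text" ends up in the response message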
I'm using Django REST framework to serve up JSON content for a website front end. On the back end, I have two Django models, Player and Match, that each reference multiple of the other. A Match contains multiple Players, and a Player contains multiple Matches. This data is originally retrieved from a third-party API.
Matches and Players must be fetched separately from the API, and can only be fetched one at a time. When an object is fetched, its data is converted from the external JSON format into my Django model. At this point, the Match/Player will live forever in Django. The hard part is that I want this external fetching to be seamless. If I query for a player or match and it's in the DB, then just serve what we have there. Otherwise, I want to fetch that object from the external DB.
My question is, does Django provide any convenient way of handling this? Ideally, any query along the lines of Match.objects.get(id=...) will handle this API fallback transparently (I don't mind the fact that this query may take significantly longer in some cases).
Whether a way is "convenient" depends on your expectations ...
You could create a custom QuerySet where you override the get() method to include your fetch-from-API logic. Then you create a custom manager based on that QuerySet, like the docs show here.
Finally add that custom manager to your model.
See also this question from 2011.
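A minimal sketch of that approach for the Match model; fetch_match_from_api() is a hypothetical client for the third-party API, and this sketch only handles lookups by id/pk:

    from django.db import models

    def fetch_match_from_api(match_id):
        """Hypothetical third-party API client; returns a dict of Match field values."""
        raise NotImplementedError

    class MatchQuerySet(models.QuerySet):
        def get(self, *args, **kwargs):
            try:
                return super(MatchQuerySet, self).get(*args, **kwargs)
            except self.model.DoesNotExist:
                # Fall back to the external API, persist the result, then return it.
                match_id = kwargs.get('pk', kwargs.get('id'))
                data = fetch_match_from_api(match_id)
                return self.model.objects.create(id=match_id, **data)

    class Match(models.Model):
        played_at = models.DateTimeField(null=True)  # hypothetical field

        objects = MatchQuerySet.as_manager()

Match.objects.get(id=...) then hits the local database first and only falls back to the API when the row is missing; the same pattern applies to Player.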
I am in the middle of developing my personal website and I am using Python to create a "comment section" where my visitors can leave comments in public (which means everybody can see them, so don't worry about user name registration). I have already set up the SQL database to store the data, but the one thing I haven't figured out yet is how to get the user input (their comments) from the browser. So, are there any modules in Python that could do that? (Like the CharField fields in Django, but unfortunately I don't use Django.)
For that you would need a web framework like Bottle or Flask. Bottle is a simple WSGI-based web framework for Python.
Using either of these you can write two simple REST endpoints: one to set and one to get. The "set" endpoint accepts data from your client side and stores it in your database, whereas the "get" endpoint returns the data by reading it from your DB.
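A minimal Flask sketch of those two endpoints; save_comment() and load_comments() are hypothetical helpers wrapping the SQL database you already have:

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route('/comments', methods=['POST'])
    def add_comment():
        payload = request.get_json()  # e.g. {"name": "...", "text": "..."} sent by your page
        save_comment(payload['name'], payload['text'])  # hypothetical DB helper
        return jsonify(status='ok'), 201

    @app.route('/comments', methods=['GET'])
    def list_comments():
        return jsonify(comments=load_comments())  # hypothetical DB helper returning a list

    if __name__ == '__main__':
        app.run()

On the page itself, a plain HTML form or a small piece of JavaScript can POST the comment to /comments.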
Hope it helps.
Is there any basic example that shows how to get POST parameters from the request in a mod_python custom handler? I have no trouble with GET requests, where I get my arguments from request.args, but if the method is POST, request.args is None.
Thanks.
request.args stores query string parameters, as mentioned in the documentation.
If you want to get POST variables, you can always read the body of the request (request.read()) and parse it yourself (URL-decode it, in your case).
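A minimal sketch of such a handler, assuming the POST body is ordinary URL-encoded form data:

    from mod_python import apache
    from urlparse import parse_qs  # mod_python runs on Python 2

    def handler(req):
        if req.method == 'POST':
            body = req.read()                  # raw, URL-encoded request body
            params = parse_qs(body)            # e.g. {'field': ['value'], ...}
        else:
            params = parse_qs(req.args or '')  # query string for GET requests
        req.content_type = 'text/plain'
        req.write(repr(params))
        return apache.OK

mod_python also ships util.FieldStorage(req), which, if I recall correctly, does this parsing for you for both GET and POST.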
But keep in mind that, as mentioned on the official mod_python homepage:
Current State of Mod_Python
Currently mod_python is not under active development. This does not mean that it is "dead" as some people have claimed. It simply means that the code and the project are mature enough when very little is required to maintain it.
Which means you may be better off using something more modern, like mod_wsgi.
I'm creating a python wrapper for Vimeo API and this is my first time creating a python distribution. I'm having questions with python caching.
I referred to this existing python-vimeo wrapper for caching the request token. The author implemented it like this:
"""By default, this client will cache API requests for 120 seconds. To
override this setting, pass in a different cache_timeout parameter (in
seconds), or to disable caching, set cache_timeout to 0."""
I'm wondering whether this will create a problem. If more than one user is connecting to Vimeo with that feature at exactly the same time, and the information is stored on the server like this:
return self._cache.setdefault(key, processor(headers, content))
doesn't that create a problem (the information will be overwritten in the cache)?
If it does create a problem, could you tell me the best solution? I think it would be to store the data in a file named after the authenticated username. Am I right?
Thanks!
I'm not sure I understand the issue, but you could create a prefixed key, where the prefix is the username. So a naive but possibly good approach is to save to the key

username + "_" + key

instead. That way there most likely wouldn't be any key collisions.
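Roughly, inside the wrapper's caching method from your snippet (username standing for whatever identifies the authenticated account):

    # Per-user cache key: two users hitting the cache at the same time no longer share an entry.
    user_key = "%s_%s" % (username, key)
    return self._cache.setdefault(user_key, processor(headers, content))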
This is a bit of a strange question, I know, but bear with me. We've developed a RESTful platform using Python for one of our iPhone apps. The webapp version has been built using Django, which makes use of this API as well. We were thinking it would be a great idea to use Django's built-in control panel capabilities to help manage the data.
This itself isn't the issue. The problem is that everyone has decided it would be best if the admin center were essentially a client that sits on top of the RESTful platform.
So, my question is: is there a way to manipulate the model layer of Django to access our API directly, rather than communicating directly with the database? The model layer would act as the client, passing requests and responses to and from the admin center.
I'm sure this is possible, but I'm not so sure as to where I would start. Any input?
I remember I once thought about doing such a thing. At the time, I created a custom Manager using a custom QuerySet, and I overrode some methods such as _filter_or_exclude(), count(), exists(), select_related(), ... and added some properties. It took less than a week to become a total mess that probably had no chance of ever working. So I immediately stopped everything and found a more suitable solution.
If I had to do it once again, I would take a long time to consider alternatives. And if it really sounds like the best thing to do, I'd probably create a custom database backend. This backend would, rather than converting Django ORM queries to SQL queries, convert them to HTTP requests.
To do so, I think the best starting point would be to get familiar with the Django source code for database backends.
I also think there are some important things to consider before starting such development:
Is the API able to handle any Django ORM request? Put another way: Will any Django ORM query be translatable to an API request?
If not, can "untranslatable" queries be safely ignored? For instance, an ORDER BY clause might be safe to ignore, while a GROUP BY clause is very unlikely to be safely dismissed.
If some queries can neither be translated nor ignored, can they be reasonably emulated? For instance, if your API does not support a COUNT() operation, you could emulate it by fetching all the data and counting it in Python with len(), but is that reasonable?
If there are still some queries that you won't be able to handle (which is more than likely): are all "common" queries (in this case, all queries potentially used by the Django admin) covered, and will it be possible to upgrade the API if an uncovered case is discovered later or is introduced in a future version of Django?
Depending on the use case, there are probably tons of other considerations to take into account, such as:
the integrity of the data
support for transactions
the latency of a query, which will probably be much higher than just querying a local (or even remote) database.