I'd like to read outside the current namespace with something like the following:
some_entity = MyModel.get_by_id(some_id_name, namespace='somenamespace')
but get_by_id doesn't take namespace as a parameter. I get:
TypeError: Unknown configuration option ('namespace')
I've gotten things to work with:
some_entity = ndb.Key(MyModel, some_id_name, namespace='somenamespace').get()
So now I'm just complaining, but I figured others could benefit from this. :) Also, since Guido monitors this, is there a reason for not allowing the namespace option in get_by_id?
EDIT: This is now possible in App Engine 1.7.0.
Please file a feature request in the NDB issue tracker: http://code.google.com/p/appengine-ndb-experiment/issues/list
You could first switch the namespace and then get the entity by id:
from google.appengine.api import namespace_manager
namespace_manager.set_namespace('thenamespace')
MyModel.get_by_id(some_id_name)
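If you switch namespaces like this, it is safest to restore the previous one afterwards. A minimal sketch, using namespace_manager.get_namespace() alongside the calls above:
from google.appengine.api import namespace_manager

previous = namespace_manager.get_namespace()
namespace_manager.set_namespace('thenamespace')
try:
    some_entity = MyModel.get_by_id(some_id_name)
finally:
    # restore whatever namespace was active before
    namespace_manager.set_namespace(previous)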
# models/__init__.py
from shared.cache import Cache

class modelA():
    pass

class modelB():
    pass

class modelC():
    pass
# shared/cache.py
class Cache:
    def methodA(self):
        modelA.SomeStaticMethod()
Basically what I need is to access modelA from inside the Cache class.
If I try to import the models from cache.py, I get a circular import error.
I know it seems a little bit weird but it's a very specific issue.
Is there any way to do that?
You would usually restructure your files so that there is no circular reference error.
To answer the question directly, though: a common workaround is to import Cache on demand, only inside the functions of models/__init__.py that actually use it. That may not be possible here, especially if Cache is used as a decorator at module level.
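The same on-demand trick can also be applied in the other direction, importing the model inside the Cache method that needs it. A rough sketch, assuming the package layout shown in the question:
# shared/cache.py
class Cache:
    def methodA(self):
        # imported at call time, after models/__init__.py has finished
        # loading, so the circular reference never bites
        from models import modelA
        modelA.SomeStaticMethod()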
See also this question.
How can I update the status of an asset in V1 using the REST API?
I would assume I could do something like this using the Python SDK:
from v1pysdk import V1Meta

v1 = V1Meta()
for s in (v1.PrimaryWorkitem
          .filter("Number='D-01240'")):
    s.StoryStatus = v1.StoryStatus(134)
v1.commit()
This is at least how I understand the Python SDK examples here:
https://github.com/versionone/VersionOne.SDK.Python
However this does not change anything, even though I have the rights to change the status.
Try using:
s.Status = v1.StoryStatus(134)
According to ~/meta.v1?xsl=api.xsl#PrimaryWorkitem, the attribute on PrimaryWorkitem of type StoryStatus is named Status, so I think it's just a mistaken attribute name.
What's probably happening is that you're just setting a new attribute on that Python object. Since StoryStatus is not one of the setters the SDK created from the instance schema metadata, the value never gets added to the uncommitted data collection, so the commit is a no-op and yields neither an error nor any action.
It might be possible to fence off arbitrary attribute access on those objects so that misspelled names raise errors. I'll investigate adding that.
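Putting that together with the snippet from the question, something like the following should work (untested; same asset number and status OID as above):
from v1pysdk import V1Meta

v1 = V1Meta()
for s in (v1.PrimaryWorkitem
          .filter("Number='D-01240'")):
    # 'Status' is the attribute defined on PrimaryWorkitem; assigning to a
    # tracked attribute queues the change so commit() actually sends it
    s.Status = v1.StoryStatus(134)
v1.commit()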
Try doing:
s.set(Status = v1.StoryStatus(134))
I want to have a dict/list to which I can add values, just like models can be added to the admin register in Django!
My attempt (package -> __init__.py):
# Singleton object
# __init__.py (Package: pack)
class remember:
    a = []                    # class-level list, shared by every instance

    def add(self, data):
        self.a.append(data)   # append() is a method call, not indexing

    def get(self):
        return self.a

obj = remember()
# models1.py
import pack
pack.obj.add("data")
# models2.py
import pack
pack.obj.add("data2")
print pack.obj.get()
# We should get: ["data", "data2"]
# We get: ["data2"]
How can I achieve the desired functionality?
Some say this can be done with plain functions/methods if you don't need subclassing; how would I do that?
Update:
To be more clear: just like the Django admin register, anyone can import it and register themselves with it, so that the register persists between imports.
If it's a singleton you're after, have a look at this old blog post. It contains a link to a well documented implementation (here).
Don't. If you think you need a global you don't and you should reevaluate how you are approaching the problem because 99% of the time you're doing it wrong.
If you have a really good reason to do it, perhaps thread-local data (e.g. threading.local()) will actually solve your problem. This allows you to set up thread-level global data. Note: this is only slightly better than a true global, should in general be avoided, and can cause you a lot of headaches.
If you're looking for a cross request "global" then you most likely want to look into storing values in memcached.
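For completeness, a minimal sketch of the plain-functions approach the question mentions: a module is imported only once per process, so module-level state already behaves like a singleton (function names here are hypothetical):
# pack/__init__.py
_registry = []

def register(data):
    _registry.append(data)

def get_all():
    return list(_registry)
models1.py and models2.py would then just do import pack; pack.register("data"). Keep in mind this only persists within a single process; across requests or processes you'd need something like memcached, as noted above.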
In particular I want to use pystache but any guide for another template engine should be good enough to set it up.
If I understood correctly, I have to register the renderer factory in the __init__.py of my pyramid application.
config = Configurator(settings=settings)
config.add_renderer(None, 'pystache_renderer_factory')
Now I need to create the renderer factory and don't know how.
Even though I found the documentation about how to add a template engine, I didn't manage to set it up.
Finally I was able to add the pystache template engine following this guide:
https://groups.google.com/forum/#!searchin/pylons-discuss/add_renderer/pylons-discuss/Y4MoKwWKiUA/cyqldA-vHjkJ
What I did:
created the file mustacherenderer.py:
from pyramid.asset import abspath_from_asset_spec
import pystache
import os

def pystache_renderer_factory(info):
    # resolve the template file from the asset spec and the renderer name
    template = os.path.join(abspath_from_asset_spec('myproj:templates', False),
                            info.name)
    f = open(template)
    s = f.read()
    f.close()

    def _render(value, system):
        return pystache.render(s, value)
    return _render
added this to the __init__.py:
config.add_renderer('.pmt', 'myproj.mustacherenderer.pystache_renderer_factory')
working :)
add_renderer's second argument is supposed to be a class that implements the interface shown in "Adding a New Renderer". Pyramid will take pystache_renderer_factory and attempt to import it, so in your code the line import pystache_renderer_factory would have to work. This example won't ever resolve to a class, only a module or package, so you'll have to fix that first. It should be something like mypackage.pystache_renderer_factory.
The best way to learn how to write a renderer is probably to look at some that have been written already. Specifically the pyramid_jinja2 package, or in Pyramid's source there are very simple implementations of json and jsonp renderers. Notice how they all provide fairly unique ways to implement the required interface. Each factory accepts an info object, and returns a callable that accepts value and system objects.
https://github.com/Pylons/pyramid_jinja2/blob/master/pyramid_jinja2/__init__.py#L260
https://github.com/Pylons/pyramid/blob/master/pyramid/renderers.py#L135
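As a bare-bones illustration of that contract (hypothetical names, not tied to pystache):
def my_renderer_factory(info):
    # 'info' describes the renderer: info.name, info.package, info.settings, ...
    def render(value, system):
        # 'value' is whatever the view returned; 'system' carries the request,
        # context and other framework values. The renderer must return a string.
        return str(value)
    return render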
Note that this answer works well until you create your Pyramid project with a scaffold. Once you do so, this related answer will prove more useful when constructing your Pystache/Mustache_Renderer_Factory: How to integrate pystache with pyramid?.
I'm creating a website based on Django (I know it's pure Python, so maybe this can also be answered by people who know Python well) and I need to call some methods dynamically.
For example, I have a few applications (modules) in my website, each with a method do_search() in its views.py. Then I have one module called, for example, "search", and there I want an action that can call all the existing do_search() methods in the other applications. Of course I'd rather not import each application explicitly and call it directly; I need a better, dynamic way to do it.
Can I read the INSTALLED_APPS variable from settings and somehow run through all the installed apps, looking for the specific method? A piece of code would help a lot here :)
Thanks in advance!
Ignas
I'm not sure if I truly understand the question, but please clarify in a comment to my answer if I'm off.
# search.py
searchables = []

def search(search_string):
    return [s.do_search(search_string) for s in searchables]

def register_search_engine(searchable):
    if hasattr(searchable, 'do_search'):
        # you may also want to check that do_search is callable
        searchables.append(searchable)
    else:
        raise TypeError("%r has no do_search() method" % searchable)
# views.py
def do_search(search_string):
    # search somehow, and return the results
    return []
# models.py
# You need to ensure this registration runs before any attempt at searching,
# e.g. in the models.py of an app listed in INSTALLED_APPS, because this
# module may not have been imported yet when search() is first called.
import search
import views
search.register_search_engine(views)
As for where to register your search engine, there is some helpful guidance in the Django signals docs that relates to this:
You can put signal handling and registration code anywhere you like. However, you'll need to make sure that the module it's in gets imported early on so that the signal handling gets registered before any signals need to be sent. This makes your app's models.py a good place to put registration of signal handlers.
So your models.py file should be a good place to register your search engine.
Alternative answer that I just thought of:
In your settings.py, you can have a setting that declares all your search functions. Like so:
# settings.py
SEARCH_ENGINES = ('app1.views.do_search', 'app2.views.do_search')

# search.py
from django.conf import settings
from django.utils import importlib

def search(search_string):
    search_results = []
    for engine in settings.SEARCH_ENGINES:
        i = engine.rfind('.')
        module, attr = engine[:i], engine[i + 1:]
        mod = importlib.import_module(module)
        do_search = getattr(mod, attr)
        search_results.append(do_search(search_string))
    return search_results
This works somewhat similarly to how MIDDLEWARE_CLASSES and TEMPLATE_CONTEXT_PROCESSORS are registered. The above is all untested code, but if you look around the Django source, you should be able to flesh it out and remove any errors.
If you can import the other applications through
import other_app
then it should be possible to perform
method = getattr(other_app, 'do_' + method_name)
result = method()
However your approach is questionable.
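If you do want the INSTALLED_APPS scan the question asks about, here is a rough sketch combining the ideas above (untested; it assumes each app keeps its do_search() in views.py):
from django.conf import settings
from django.utils import importlib

def search_everywhere(search_string):
    results = []
    for app in settings.INSTALLED_APPS:
        try:
            views = importlib.import_module(app + '.views')
        except ImportError:
            # app has no views module; skip it
            continue
        do_search = getattr(views, 'do_search', None)
        if callable(do_search):
            results.append(do_search(search_string))
    return results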