GAE - How do I edit / update the datastore in Python?

I have this datastore model:
class Project(db.Model):
    projectname = db.StringProperty()
    projecturl = db.StringProperty()

class Task(db.Model):
    project = db.ReferenceProperty(Project)
    taskname = db.StringProperty()
    taskdesc = db.StringProperty()
How do I edit the value of taskname? Say I have "task1" and I want to change it to "task1-project".

Oops, sorry. Here is the formatted code:
taskkey = self.request.get("taskkey")
taskid = Task.get(taskkey)
query = db.GqlQuery("SELECT * FROM Task WHERE key = :taskid", taskid=taskid)
if query.count() > 0:
    task = Task()
    task.taskname = "task1-project"
    task.put()
By the way, I get it now: I changed task = Task() into task = query.get() and it worked.
Thanks for helping.

Given an instance t of Task (e.g. from some get operation on the db), you can perform the alteration you want, e.g. by t.taskname = t.taskname + '-project' (if what you want is to append '-project' to whatever was there before). Eventually you will also need to .put() t back into the store, of course (but if you make multiple changes you don't need to put it back after each and every change -- only when you're done changing it).
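A minimal sketch of that flow, assuming the Task model above and that taskkey holds a string-encoded datastore key taken from the request:

# db.Model.get() accepts a string-encoded key, so this fetches the existing entity.
task = Task.get(taskkey)

# Mutate the fields in memory; nothing is written to the datastore yet.
task.taskname = task.taskname + '-project'

# A single put() persists all accumulated changes at once.
task.put()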

Probably the easiest way is to use the admin console. Locally it's:
http://localhost:8080/_ah/admin
and if you've uploaded it, it's the dashboard:
http://appengine.google.com/dashboard?&app_id=******

Celery - how to get task name by task id?

Celery - bottom line: I want to get the task name by using the task id (I don't have a task object)
Suppose I have this code:
res = chain(add.s(4,5), add.s(10)).delay()
cache.save_task_id(res.task_id)
And then in some other place:
task_id = cache.get_task_ids()[0]
task_name = get_task_name_by_id(task_id) #how?
print(f'Some information about the task status of: {task_name}')
I know I can get the task name if I have a task object, like here: celery: get function name by task id?.
But I don't have a task object (perhaps it can be created by the task_id or by some other way? I didn't see anything related to that in the docs).
In addition, I don't want to save in the cache the task name. (Suppose I have a very long chain/other celery primitives, I don't want to save all their names/task_ids. Just the last task_id should be enough to get all the information regarding all the tasks, using .parents, etc)
I looked at all the relevant methods of AsyncResult and AsyncResult.Backend objects. The only thing that seemed relevant is backend.get_task_meta(task_id), but that doesn't contain the task name.
Thanks in advance
PS: AsyncResult.name always returns None:
result = AsyncResult(task_id, app=celery_app)
result.name #Returns None
result.args #Also returns None
Finally found an answer.
For anyone wondering:
You can solve this by enabling result_extended = True in your celery config.
Then:
result = AsyncResult(task_id, app=celery_app)
result.task_name #tasks.add
You have to enable it first in Celery configurations:
celery_app = Celery()
...
celery_app.conf.update(result_extended=True)
Then, you can access it:
task = AsyncResult(task_id, app=celery_app)
task.name
Something like the following (pseudocode) should be enough:
app = Celery("myapp") # add your parameters here
task_id = "6dc5f968-3554-49c9-9e00-df8aaf9e7eb5"
aresult = app.AsyncResult(task_id)
task_name = aresult.name
task_args = aresult.args
print(task_name, task_args)
Unfortunately, it does not work (I would say it is a bug in Celery), so we have to find an alternative. The first thing that came to my mind was that the Celery CLI has an inspect query_task feature, which hinted that it should be possible to find the task name using the inspect API, and I was right. Here is the code:
# Since the expected way does not work we need to use the inspect API:
insp = app.control.inspect()
task_ids = [task_id]
inspect_result = insp.query_task(*task_ids)
# print(inspect_result)
for node_name in inspect_result:
    val = inspect_result[node_name]
    if val:
        # we found the node that executes the task
        arr = val[task_id]
        state = arr[0]
        meta = arr[1]
        task_name = meta["name"]
        task_args = meta["args"]
        print(task_name, task_args)
The problem with this approach is that it works only while the task is running; the moment it is done, you will not be able to use the code above.
This is not very clear from the docs for celery.result.AsyncResult, but not all of the properties are populated unless you enable result_extended = True, as per the configuration docs:
result_extended
Default: False
Enables extended task result attributes (name, args, kwargs, worker, retries, queue, delivery_info) to be written to backend.
Then the following will work:
result = AsyncResult(task_id)
result.name    # 'project.tasks.my_task'
result.args    # [2, 3]
result.kwargs  # {'a': 'b'}
Also be aware that the rpc:// backend does not store this data; you will need Redis or similar. If you are using rpc, even with result_extended = True you will still get None returned.
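A minimal configuration sketch, assuming a local Redis instance (the broker/backend URLs are placeholders, not taken from the question):

from celery import Celery

# result_extended only helps if the result backend actually persists the extra
# attributes, so use a real backend such as Redis rather than rpc://.
celery_app = Celery(
    "myapp",
    broker="redis://localhost:6379/0",   # placeholder broker URL
    backend="redis://localhost:6379/1",  # placeholder result backend URL
)
celery_app.conf.update(result_extended=True)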
I found a good answer in this code snippet.
If and when you have an instance of AsyncResult you do not need the task_id, rather you can simply do this:
result # instance of AsyncResult
result_meta = result._get_task_meta()
task_name = result_meta.get("task_name")
Of course this relies on a private method, so it's a bit hacky. I hope celery introduces a simpler way to retrieve this - it's especially useful for testing.
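For convenience, that lookup could be wrapped in a tiny helper (a hypothetical function, not part of Celery, and still relying on the same private method):

def task_name_from_result(result):
    """Return the task name recorded in the result backend, or None."""
    # _get_task_meta() is private API and may change between Celery versions.
    return result._get_task_meta().get("task_name")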

How do I retrieve a path's data from firebase database using python?

I have this firebase database structure
I want to print out the inventory list(Inventory) for each ID under Businesses.
So I tried this code
db = firebase.database()
all_users = db.child("Businesses").get()
for user in all_users.each():
    userid = user.key()
    inventorydb = db.child("Businesses").child(userid).child("Inventory")
    print(inventorydb)
but all I got is this
<pyrebase.pyrebase.Database object at 0x1091eada0>
what am I doing wrong and how can I loop through each Business ID and print out their inventory?
First, you're printing a Database object. You need to get the data still.
You seem to already know how to get that, as well as the children -- or perhaps you only copied the examples without understanding them. Either way, you can try this:
db = firebase.database()
businesses = db.child("Businesses")
for userid in businesses.shallow().get().each():
    inventory = businesses.child(userid).child("Inventory").get()
    print(inventory.val())
On a side note, National_Stock_Numbers looks like it should be a value of the name, not a key for a child

Django model not saving to database

OK, a small team I am working with is new to Django and was developing a webapp, when all of a sudden we lost the ability to add a model object to the database. We are all at a complete loss. Below is where we currently are with debugging.
views.py
def postOp(request):
    if request.method == 'POST':
        operation = request.POST.get("operation", "noop")
        # Registered user operations
        if request.user.is_authenticated():
            username = request.session.get("member", "Guest")
            user = ToolUser.objects.get(name=username)
            zipcode = user.location
            .
            .
            # AddTool
            if operation == "addTool":
                toolName = request.POST.get("toolName", "N/A")
                toolDesc = request.POST.get("toolDesc", "N/A")
                print("In addtools")
                user.submitTool(toolName, toolDesc)
                print("SUBITTED")
                return HttpResponseRedirect("tools")
model
def submitTool(self, Nname, Ndescription):
    print("IN SUBMIT ")
    t = Tool(name=Nname, owner=self.id, shed=self.id, description=Ndescription,
             currOwner=0, location=self.location)
    print("tool made :", t.name, ", ", t.owner, ", ", t.shed, ", ", t.description,
          ", ", t.currOwner, ", ", t.location)
    t.save()
    print("saving tool")
It appears that it gets all the way to the t.save(), then breaks. Using a separate tool to view the database, the row is clearly not getting saved to the table, BUT judging from the following terminal output it does appear to be creating the instance.
terminal output:
In addtools
IN SUBMIT
tool made : tooltest , 2 , 2 , description , 0 , 12345
EDIT: I forgot to update this; we found the problem. It turns out one field was empty, and Django refuses to save an object with an empty required field.
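A minimal sketch of that failure mode, assuming (hypothetically) that Tool declares a non-nullable column and gets None for it:

from django.db import models

class Tool(models.Model):
    name = models.CharField(max_length=100)
    # null=False (the default) means the column is NOT NULL in the database.
    owner = models.IntegerField(null=False)

# Tool(name="tooltest", owner=None).save()
# -> django.db.utils.IntegrityError: NOT NULL constraint failed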
So wait, you have a model method called submitTool() but you're calling user.submitTool (where user, I presume, is django.contrib.auth.User)? If you say that resolves, then fine.
It's probably better to just populate the object in the postOp() function and save it there. If submitTool() is indeed part of the model class, you're instantiating one model object and then calling a method on it just to instantiate and save another.
My point is that, from a style perspective, the code is needlessly complex, or this needs more data to really work out what's happening there.
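A sketch of that simplification, reusing the names from the question's snippets (ToolUser, Tool, and the POST fields), so treat it as illustrative rather than a drop-in fix:

def postOp(request):
    if request.method == 'POST' and request.user.is_authenticated():
        operation = request.POST.get("operation", "noop")
        username = request.session.get("member", "Guest")
        user = ToolUser.objects.get(name=username)

        if operation == "addTool":
            # Build and save the Tool directly in the view instead of
            # delegating to a model method.
            tool = Tool(
                name=request.POST.get("toolName", "N/A"),
                description=request.POST.get("toolDesc", "N/A"),
                owner=user.id,
                shed=user.id,
                currOwner=0,
                location=user.location,
            )
            tool.save()
            return HttpResponseRedirect("tools")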

Updating DataStore JSON values using endpoints (Python)

I am trying to use endpoints to update some JSON values in my datastore. I have the following datastore model in GAE...
class UsersList(ndb.Model):
    UserID = ndb.StringProperty(required=True)
    ArticlesRead = ndb.JsonProperty()
    ArticlesPush = ndb.JsonProperty()
In general what I am trying to do with the API is have the method take in a UserID and a list of articles read (with an article being represented by a dictionary holding an ID and a boolean field saying whether or not the user liked the article). My messages (centered on this logic) are the following...
class UserID(messages.Message):
    id = messages.StringField(1, required=True)

class Articles(messages.Message):
    id = messages.StringField(1, required=True)
    userLiked = messages.BooleanField(2, required=True)

class UserIDAndArticles(messages.Message):
    id = messages.StringField(1, required=True)
    items = messages.MessageField(Articles, 2, repeated=True)

class ArticleList(messages.Message):
    items = messages.MessageField(Articles, 1, repeated=True)
And my API/Endpoint method that is trying to do this update is the following...
@endpoints.method(UserIDAndArticles, ArticleList,
                  name='user.update',
                  path='update',
                  http_method='GET')
def get_update(self, request):
    userID = request.id
    articleList = request.items
    queryResult = UsersList.query(UsersList.UserID == userID)
    currentList = []
    # This query always returns only one result back, and this for loop is the only way
    # I could figure out how to access the query results.
    for thing in queryResult:
        currentList = json.loads(thing.ArticlesRead)
    for item in articleList:
        currentList.append(item)
    for blah in queryResult:
        blah.ArticlesRead = json.dumps(currentList)
        blah.put()
    for thisThing in queryResult:
        pushList = json.loads(thisThing.ArticlesPush)
    return ArticleList(items=pushList)
I am having two problems with this code. The first is that I can't seem to figure out (using the localhost Google APIs Explorer) how to send a list of articles to the endpoints method using my UserIDAndArticles class. Is it possible to have a messages.MessageField() as an input to an endpoint method?
The other problem is that I am getting an error on the 'blah.ArticlesRead = json.dumps(currentList)' line. When I try to run this method with some random inputs, I get the following error...
TypeError: <Articles
id: u'hi'
userLiked: False> is not JSON serializable
I know that I have to make my own JSON encoder to get around this, but I'm not sure what the format of the incoming request.items is like and how I should encode it.
I am new to GAE and endpoints (as well as this kind of server side programming in general), so please bear with me. And thanks so much in advance for the help.
A couple things:
http_method should definitely be POST, or better yet PATCH because you're not overwriting all existing values but only modifying a list, i.e. patching.
you don't need json.loads and json.dumps, NDB does it automatically for you.
you're mixing Endpoints messages and NDB model properties.
Here's the method body I came up with:
# get UsersList entity and raise an exception if none found.
uid = request.id
userlist = UsersList.query(UsersList.UserID == uid).get()
if userlist is None:
raise endpoints.NotFoundException('List for user ID %s not found' % uid)
# update user's read articles list, which is actually a dict.
for item in request.items:
userslist.ArticlesRead[item.id] = item.userLiked
userslist.put()
# assuming userlist.ArticlesPush is actually a list of article IDs.
pushItems = [Article(id=id) for id in userlist.ArticlesPush]
return ArticleList(items=pushItems)
Also, you should probably wrap this method in a transaction.
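A sketch of how those suggestions could fit together, reusing the message and model classes from the question; the helper name and the keys_only lookup are illustrative, and the transactional part operates on a key because ndb does not allow non-ancestor queries inside a transaction:

from google.appengine.ext import ndb

@ndb.transactional
def _apply_updates(key, items):
    # The read-modify-write of the entity happens atomically inside the transaction.
    userlist = key.get()
    for item in items:
        userlist.ArticlesRead[item.id] = item.userLiked
    userlist.put()
    return userlist

# Inside the endpoint method (declared with http_method='PATCH' as suggested above):
key = UsersList.query(UsersList.UserID == request.id).get(keys_only=True)
if key is None:
    raise endpoints.NotFoundException('List for user ID %s not found' % request.id)
userlist = _apply_updates(key, request.items)
return ArticleList(items=[Articles(id=article_id) for article_id in userlist.ArticlesPush])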

couchdb-python change notifications

I'm trying to use couchdb.py to create and update databases. I'd like to implement change notifications, preferably in continuous mode. Running the test code posted below, I don't see how the changes mechanism works from within Python.
class SomeDocument(Document):
#############################################################################
#    def __init__ (self):
    intField = IntegerField()  # for now - this should be an integer
    textField = TextField()

couch = couchdb.Server('http://127.0.0.1:5984')
databasename = 'testnotifications'
if databasename in couch:
    print 'Deleting then creating database ' + databasename + ' from server'
    del couch[databasename]
    db = couch.create(databasename)
else:
    print 'Creating database ' + databasename + ' on server'
    db = couch.create(databasename)

for iii in range(5):
    doc = SomeDocument(intField=iii, textField='somestring' + str(iii))
    doc.store(db)
    print doc.id + '\t' + doc.rev

something = db.changes(feed='continuous', since=4, heartbeat=1000)

for iii in range(5, 10):
    doc = SomeDocument(intField=iii, textField='somestring' + str(iii))
    doc.store(db)
    time.sleep(1)

print something
print db.changes(since=iii-1)
The value
db.changes(since=iii-1)
returns information that is of interest, but in a format from which I haven't worked out how to extract the sequence or revision numbers, or the document information:
{u'last_seq': 6, u'results': [{u'changes': [{u'rev': u'1-9c1e4df5ceacada059512a8180ead70e'}], u'id': u'7d0cb1ccbfd9675b4b6c1076f40049a8', u'seq': 5}, {u'changes': [{u'rev': u'1-bbe2953a5ef9835a0f8d548fa4c33b42'}], u'id': u'7d0cb1ccbfd9675b4b6c1076f400560d', u'seq': 6}]}
Meanwhile, the code I'm really interested in using:
db.changes(feed='continuous',since=4,heartbeat=1000)
Returns a generator object and doesn't appear to provide notifications as they come in, as the CouchDB guide suggests ....
Has anyone used changes in couchdb-python successfully?
I use long polling rather than continuous, and that works OK for me. In long polling mode db.changes blocks until at least one change has happened, and then returns all the changes in a generator object.
Here is the code I use to handle changes. settings.db is my CouchDB Database object.
since = 1
while True:
    changes = settings.db.changes(since=since)
    since = changes["last_seq"]
    for changeset in changes["results"]:
        try:
            doc = settings.db[changeset["id"]]
        except couchdb.http.ResourceNotFound:
            continue
        else:
            pass  # process doc here
As you can see it's an infinite loop where we call changes on each iteration. The call to changes returns a dictionary with two elements, the sequence number of the most recent update and the objects that were modified. I then loop through each result loading the appropriate object and processing it.
For a continuous feed, instead of the while True: line use for changes in settings.db.changes(feed="continuous", since=since).
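A minimal sketch of that continuous variant, under the same assumptions as above (settings.db and since come from the previous snippet):

seen_ids = []
for change in settings.db.changes(feed="continuous", since=since, heartbeat=1000):
    if "id" not in change:
        continue  # defensively skip any heartbeat/status lines without a document id
    try:
        doc = settings.db[change["id"]]
    except couchdb.http.ResourceNotFound:
        continue
    seen_ids.append(doc.id)  # stand-in for real processing of the document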
I set up a mail spooler using something similar to this. You'll also need to load couchdb.Session(). I also use a filter so that only unsent emails reach the spooler's changes feed.
from couchdb import Server

s = Server('http://localhost:5984/')
db = s['testnotifications']

# the since parameter defaults to 'last_seq' when using a continuous feed
ch = db.changes(feed='continuous', heartbeat='1000', include_docs=True)
for line in ch:
    doc = line['doc']
    # process doc here
    doc['priority'] = 'high'
    doc['recipient'] = 'Joe User'
    # doc['state'] + 'sent'
    db.save(doc)
This will allow you access your doc directly from the changes feed, manipulate your data as you see fit, and finally update you document. I use a try/except block on the actual 'db.save(doc)' so I can catch when a document has been updated while I was editing and reload the doc before saving.
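A sketch of that conflict handling, built on the loop above (ResourceConflict is what couchdb-python raises when the revision you hold is stale):

import couchdb  # for couchdb.http.ResourceConflict

try:
    db.save(doc)
except couchdb.http.ResourceConflict:
    # Someone else updated the document while we were editing it:
    # reload the latest revision, reapply our changes, and save again.
    doc = db[doc['_id']]
    doc['priority'] = 'high'
    doc['recipient'] = 'Joe User'
    db.save(doc)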
