I'm writing a method to update several fields in multiple instances in my database. For now, I'm trying to get it to work just for one.
My user uploads a CSV file with all the information to change (including the pk). I've written the function that parses all the information, and this all works fine. I can even assign the data to an item, and if I print it from that function, it comes out correctly. However, when I save the updates (using item.save()) nothing seems to change in the database.
Here's a very stripped down version of the method. I really don't know why it isn't working. I've done something very similar in other spots (getting data through a form, setting the field, calling save, and then displaying the changed information), and I've used a very similar CSV uploading technique to create new entries.
Small piece of relevant code:
import csv
import datetime

reader = csv.reader(f)
for row in reader:
    pk = row[0]
    print(pk)
    item = POObject.objects.get(pk=pk)
    p2 = item.purchase2
    print(item.purchase.requested_start_date)
    print(p2.requested_start_date)
    requested_start_date = row[6]
    requested_start_date = datetime.datetime.strptime(requested_start_date, "%d %b %y")
    print(requested_start_date)
    p2.requested_start_date = requested_start_date
    p2.save()
    print(p2.requested_start_date)
    item.purchase2 = p2
    item.save()
    print(item.purchase.requested_start_date)
    return pk
Obviously I have lots of prints in there to find where things go wrong. Basically, item looks fine if I inspect it, but if I query the database again after saving, i.e. doing item2 = POObject.objects.get(pk=pk), it won't have any of the updates. Does anyone have any idea why save() isn't doing anything?
UPDATE:
The mystery continues.
If I update a field that isn't part of an FK relation (say, a text field), everything works fine. However, what I really need to do is update an item and then set that item as the FK relation on the main item in question. I'm not sure why the normal approach (updating the inner item, saving it, and then setting the FK to that new, updated item) isn't working.
Whoa. I feel a little ashamed that I didn't figure this out sooner. I had forgotten exactly how I had designed one of my models: there was another object nested inside it that also needed to be updated, but I wasn't saving it.
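For anyone who hits the same thing, a minimal sketch of the fix; the nested schedule object is illustrative, not my actual model:
p2 = item.purchase2
inner = p2.schedule                  # hypothetical nested object holding the date
inner.requested_start_date = requested_start_date
inner.save()                         # this inner save() was the missing piece
p2.save()
item.purchase2 = p2
item.save()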
Essentially, I'll be using a database where each user node holds a name and an xp_data section, to keep track of the users' XP. Under the xp_data section there will be multiple timestamps, with an XP number for each timestamp. A function will run every 24 hours to log the users' XP. I want some way to check whether a player is already in the database (and if so, add to their existing XP data), and if not, create a new node for them. Here is my code for writing to the server:
db_ref = db.reference('/')
xp_trigger = False  # set once we reach users below the XP floor
details = []
for i in range(100):
    tom = await mee6API.levels.get_leaderboard_page(i)
    if xp_trigger:
        break
    this_lb_list = {}
    for l in tom['players']:
        if l['xp'] < 300:
            xp_trigger = True
            break
        this_lb_list.update({l['id']: {'name': l['username'], 'xp_data': {time.strftime(time_format_str, time.gmtime()): l['xp']}}})
        details += [{int(l['id']): l['xp']}]
    print(i)
    db_ref.update(this_lb_list)
Basically, this code loops through each page of the leaderboard, obtains the XP for each user, and adds it to a dict, which is then used to update the database. There are two problems with this code: first, it does not check whether the user already exists; second, and as a consequence, it overwrites the user's existing data. I've also tried writing the data for each player individually, but problem 1 was still an issue, and it was painfully slow. What can I do to rectify this?
When you pass a value for a property in update(), that value replaces the entire existing value of the property in the database. So while update() leaves the properties you don't specify in the call unmodified, it does completely replace any property you do specify.
To add a value to an existing property, you'll want to specify the entire path as the key, separating the various child nodes with /.
So something like:
this_lb_list.update({'xp_data/13-Aug-2021': l['xp']})
This will write only the 13-Aug-2021 child of xp_data, leaving all other child nodes of xp_data unmodified.
You'll of course want to use a variable for the date/time, but the important thing is that you specify it in the key, and not in the value of the dictionary.
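Applied to the loop above, a minimal sketch (the exact date format is illustrative):
import time

time_key = time.strftime('%d-%b-%Y', time.gmtime())  # e.g. '13-Aug-2021'
updates = {}
for l in tom['players']:
    # path-style keys touch one child each, so existing xp_data entries survive
    updates[f"{l['id']}/name"] = l['username']
    updates[f"{l['id']}/xp_data/{time_key}"] = l['xp']
db_ref.update(updates)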
I'm currently trying to create a small python program using SolrClient to index some files.
My need is that I want to index some file content and then add some attributes to enrich the document.
I used the post command line tool to index the files. Then I use a Python program to try to enrich the documents, something like this:
doc = solr.get('collection', id)
doc['new_attribute'] = 'value'
solr.index_json('collection', json.dumps([doc]))
solr.commit(openSearcher=True)
The problem is that I have the feeling the file content is lost from the index. If I run a query with a word present in one of the doc's attributes, I find it.
If I run a query with a word that appears only in the file, it does not work (whereas it does work when I index the file with post alone, without my update attempt).
I'm not sure I understand how to update the doc while keeping the index created by the post command.
I hope I'm clear enough; maybe I misunderstood the way it works...
Thanks a lot
If I understand correctly, you want to modify an existing record. You should be able to do something like this, without using solr.get:
doc = [{'id': 'value', 'new_attribute': {'set': 'value'}}]
solr.index_json('collection', json.dumps(doc))
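One caveat with atomic updates like this: Solr rebuilds the document from its stored fields, so every field needs to be stored (or docValues) for it to work. Any field that is indexed but not stored, such as extracted file content, is silently lost on update, which matches the symptom described above.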
See also:
https://cwiki.apache.org/confluence/display/solr/Updating+Parts+of+Documents
It worked for me this way; it may be useful for someone:
import json
from SolrClient import SolrClient

solrConect = SolrClient("http://xx.xx.xxx.xxx:8983/solr/")
doc = [{'id': 'my_id', 'count_related_like': {'set': 10}}]
solrConect.index_json("my_collection", json.dumps(doc))
solrConect.commit("my_collection", softCommit=True)
Trying with curl did not change anything, so I did it differently and now it works. Instead of adding the file with the post command and trying to modify it afterwards, I read the file into a string and index it in a "content" field. That way every document is added in one shot.
The content field is defined as not stored, so I just index it.
It works fine and suits my needs. It's also simpler, since it removes many attributes set by the post command that I don't need.
If I find some time, I'll try again the partial update and update the post.
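A minimal sketch of that one-shot approach (path, doc_id and the extra attribute are placeholders):
import json

with open(path, encoding='utf-8') as fh:
    content = fh.read()                     # raw file content as a string

doc = {'id': doc_id, 'new_attribute': 'value', 'content': content}
solr.index_json('collection', json.dumps([doc]))
solr.commit('collection', openSearcher=True)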
Thanks
Rémi
I have a database with a bunch of regular documents that look something like this (example from wiki):
{
    "_id": "some_doc_id",
    "_rev": "D1C946B7",
    "Subject": "I like Plankton",
    "Author": "Rusty",
    "PostedDate": "2006-08-15T17:30:12-04:00",
    "Tags": ["plankton", "baseball", "decisions"],
    "Body": "I decided today that I don't like baseball. I like plankton."
}
I'm working in Python with couchdb-python and I want to know if it's possible to add a field to each document. For example, if I wanted to have a "Location" field or something like that.
Thanks!
Regarding IDs
Every document in couchdb has an id, whether you set it or not. Once the document is stored you can access it through the doc._id field.
If you want to set your own ids you'll have to assign the id value to doc._id. If you don't set it, then couchdb will assign a uuid.
If you want to update a document, you need to make sure you have the same id and a valid revision. If, say, you are working from a blog post and the user adds the Location, then the URL of the post may be a good id to use. You'd be able to access the document instantly in that case.
So what's a revision
In your code snippet above you have the doc._rev element. This is the identifier of the revision. If you save a document with an id that already exists, couchdb requires you to prove that you are working from the current revision of the document, so that you are not unknowingly overwriting someone else's changes.
So how do I update a document
If you have the id of your document, you can just access each document by using the db.get(id) function. You can then update the document like this:
doc = db.get(id)
doc['Location'] = "On a couch"
db.save(doc)
I have an example where I store weather forecast data, which I update approximately every 2 hours. A separate process enriches that data with information from a different provider, based on characteristics of tweets from that day.
That looks something like this:
doc = db.get(id)
doc_with_loc = GetLocationInformationFromOtherProvider(doc)  # takes about 40 seconds
doc_with_loc["_rev"] = doc["_rev"]
db.save(doc_with_loc)  # fails if the weather update has also updated the document
If you have concurrent processes, the _rev can become invalid between the get and the save, so you need a failsafe, e.g. something like this:
import couchdb

doc = db.get(id)
doc_with_loc = GetLocationInformationFromAltProvider(doc)
update_outstanding = True
while update_outstanding:
    doc = db.get(id)                       # re-retrieve to get the latest _rev
    doc_with_loc['_rev'] = doc['_rev']
    try:
        db.save(doc_with_loc)
        update_outstanding = False
    except couchdb.http.ResourceConflict:  # another process saved first
        pass                               # loop again with the fresh _rev
So how do I get the Ids?
One option suggested above is to actively set the id so you can retrieve it, i.e. if a user sets a given location that is attached to a URL, use the URL. But you may not know which document you want to update, or you may want a process that finds all the documents that don't have a location and assigns one.
You'll most likely be using a view for this. Views have a mapper and a reducer; you'll use the first one and can forget about the second. A view with a mapper does the following:
It returns a simplified/transformed way of looking at your data. You can return multiple values per document or skip some. It gives the data you emit a key, and if you use the include_docs option it will give you the document (with _id and _rev) alongside.
The simplest view is the default view db.view('_all_docs'); this returns all documents, and you may not want to update all of them. Note that views are themselves stored as documents when you define them.
The next simple way is to have a view that only returns items that are of the type of the document. I tend to have a _type="article" field in my database. Think of this as marking that a document belongs to a certain table, if you had stored them in a relational database.
Finally, you can filter for elements that don't yet have a location, so you'd have a view that lets you iterate over all the docs that still need a location and assign one in a separate process. The best documentation on writing views can be found here.
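A sketch of that last idea with couchdb-python, using a temporary view for brevity (in production you'd define the map function in a design document; lookup_location is a hypothetical helper):
map_fun = '''function(doc) {
    if (doc._type === "article" && !doc.Location) {
        emit(doc._id, null);
    }
}'''

for row in db.query(map_fun):                 # temporary view, fine for a one-off
    doc = db[row.id]
    doc['Location'] = lookup_location(doc)    # hypothetical enrichment helper
    db.save(doc)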
I use py2neo (v 1.9.2) to write data to a neo4j db.
batch = neo4j.WriteBatch(graph_db)
current_relationship_index = graph_db.get_or_create_index(neo4j.Relationship, "Current_Relationship")
touched_relationship_index = graph_db.get_or_create_index(neo4j.Relationship, "Touched_Relationship")

get_rel = current_relationship_index.get(some_key1, some_value1)
if len(get_rel) == 1:
    batch.add_indexed_relationship(touched_relationship_index, some_key2, some_value2, get_rel[0])
elif len(get_rel) == 0:
    created_rel = current_relationship_index.create(some_key3, some_value3, (my_start_node, "KNOWS", my_end_node))
    batch.add_indexed_relationship(touched_relationship_index, some_key4, "touched", created_rel)

batch.submit()
Is there a way to replace current_relationship_index.get(...) and current_relationship_index.create(...) with batch commands? I know such commands exist, but the problem is that I need to act depending on what they return. And I would like to have all statements in a batch, for performance.
I have read that it is rather uncommon to index relationships, but the reason I do it is the following: I need to parse some (text) file every day and then check whether any of the relations have changed relative to the previous day, i.e. if a relation no longer exists in the text file I want to mark it with a "replaced" property in neo4j. Therefore, I add all "touched" relationships to the appropriate index, so I know they did not change. All relations that are not in touched_relationship_index obviously do not exist anymore, so I can mark them.
I can't think of an easier way to do so, even though I'm sure that py2neo offers one.
EDIT: Considering Nigel's comment I tried this:
my_rel = batch.get_or_create_indexed_relationship(current_relationship_index, some_key, some_value, my_start_node, my_type, my_end_node)
batch.add_indexed_relationship(touched_relationship_index, some_key2, some_value2, my_rel)
batch.submit()
This obviously does not work, because I can't refer to my_rel within the batch. How can I solve this? Refer to the result of the previous batch statement with "0"? But consider that the whole thing is supposed to run in a loop, so the numbers are not fixed. Maybe use some "batch_counter" variable that refers to the current batch statement and is incremented whenever a statement is added to the batch?
Have a look at WriteBatch.get_or_create_indexed_relationship. It can conditionally create a relationship based on whether or not one currently exists, and it operates atomically. Documentation link below:
http://book.py2neo.org/en/latest/batches/#py2neo.neo4j.WriteBatch.get_or_create_indexed_relationship
There are a few similar uniqueness-management facilities in py2neo that I recently blogged about here, which you might want to read about.
Even with all I do know about the AppEngine datastore, I don't know the answer to this. I'm trying to avoid having to write and run all the code it would take to figure it out, hoping someone already knows the answer.
I have code like:
class AddlInfo(db.Model):
    user = db.ReferenceProperty(User)
    otherstuff = db.ListProperty(db.Key, indexed=False)
And create the record with:
info = AddlInfo(user=user)
info.put()
To get this object I can do something like:
# This seems excessively wordy (even though that doesn't directly translate into slower)
info = AddlInfo.all().filter('user =', user).fetch(1)
or I could do something like:
class AddlInfo(db.Model):
    # str(user.key()) is the key_name of this record
    otherstuff = db.ListProperty(db.Key, indexed=False)
Creation looks like:
info = AddlInfo(key_name=str(user.key()))
info.put()
And then get the info with:
info = AddlInfo.get_by_key_name(str(user.key()))
I don't need the ReferenceProperty in AddlInfo (I got there using the user object in the first place). Which is faster/less resource intensive?
==================
Part of why I was doing it this way is that otherstuff could be a list of 100+ keys, and I only need them sometimes (probably less than 50% of the time). I was trying to make things more efficient by not having to load those 100+ keys on every request.
Between those 2 options, the second is marginally cheaper, because you're determining the key by inference rather than looking it up in a remote index.
As Wooble said, it's cheaper still to just keep everything on one entity. Consider an Expando if you just need a way to store a bunch of optional, ad-hoc properties.
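A minimal sketch of that single-entity Expando approach (UserInfo and its properties are placeholders):
from google.appengine.ext import db

class UserInfo(db.Expando):
    pass                                    # properties are added ad hoc

info = UserInfo(key_name=user.key().name())
info.otherstuff = [key1, key2]              # optional list, set only when needed
info.put()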
The second approach is the better one, with one modification: there's no need to use the whole key of the user as the key name of this entity; just use the same key name as the User record.
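In code, that modification could look something like this (a sketch, assuming the User entity itself was created with a key name):
info = AddlInfo(key_name=user.key().name())
info.put()

# and later:
info = AddlInfo.get_by_key_name(user.key().name())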