I made an Atom feed in Django using a class that looks something like this:
class AtomFeed(Feed):
    feed_type = feedgenerator.Atom1Feed
    # ...
    def item_pubdate(self, post):
        return datetime.datetime(post.date.year, post.date.month, post.date.day)
The resulting XML for an item:
<entry>
    <title>..</title>
    <link href="..." rel="alternate"></link>
    <updated>2010-10-18T00:00:00+02:00</updated>
    <author><name>...</name></author>
    <id>...</id>
    <summary type="html">...</summary>
</entry>
The thing to note here is that the date goes in the atom:updated element, not the atom:published element.
The RFC clearly suggests to me that this is not the intended usage:
The "atom:updated" element is a Date construct indicating the most recent instant in time when an entry or feed was modified in a way the publisher considers significant. Therefore, not all modifications necessarily result in a changed atom:updated value.
Whereas:
The "atom:published" element is a Date construct indicating an instant in time associated with an event early in the life cycle of the entry.
This is more than just a theoretical problem. Google Reader, for example, does not seem to use the updated element, and uses the date that it first saw the item appear. As a result, it does not order the items properly upon first import of the feed.
The code in Django responsible for this:
django/utils/feedgenerator.py:331
if item['pubdate'] is not None:
    handler.addQuickElement(u"updated", rfc3339_date(item['pubdate']).decode('utf-8'))
There appears to be no mention of the published element.
Is this a bug in Django? Am I misunderstanding the Atom RFC? Am I missing something else?
You are not missing anything. The Atom RFC is correct, and this is a known bug in Django; see this Django bug.
It looks like a relatively simple fix, so feel free to get in there and patch it! ^_^
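Until a fix lands, one possible workaround is to subclass the Atom feed generator and emit atom:published yourself. This is only a minimal sketch that mirrors the feedgenerator internals quoted above; adapt it to your Django version:

from django.utils import feedgenerator

class PublishedAtom1Feed(feedgenerator.Atom1Feed):
    # Emit atom:published in addition to atom:updated, reusing pubdate.
    def add_item_elements(self, handler, item):
        super(PublishedAtom1Feed, self).add_item_elements(handler, item)
        if item['pubdate'] is not None:
            handler.addQuickElement(u"published",
                feedgenerator.rfc3339_date(item['pubdate']).decode('utf-8'))

class AtomFeed(Feed):
    feed_type = PublishedAtom1Feed
    # ...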
I am new to the bioservices Python package. I am trying to use it to retrieve PMIDs for two citations, given the specified information, and this is the code I have tried:
from bioservices import EUtils
s = EUtils()
print(s.ECitMatch("pubmed",retmode="xml", bdata="proc+natl+acad+sci+u+s+a|1991|88|3248|mann+bj|Art1|%0Dscience|1987|235|182|palmenberg+ac|Art2|"))
But it raises an error:
"TypeError: ECitMatch() got multiple values for argument 'bdata'".
Could anyone help me to solve that problem?
I think the issue is that you have an unnamed argument ("pubmed"): if you look at the source code, you can see that the first positional argument of ECitMatch is bdata. With the arguments you pass, it is ambiguous whether bdata should be "pubmed" or the named argument bdata, hence the error you obtain.
You can reproduce it with this minimal example:
def dummy(a, b):
    return a, b

dummy(10, a=3)
will raise
TypeError: dummy() got multiple values for argument 'a'
If you remove "pubmed", the error disappears; however, the output is still incomplete:
from bioservices import EUtils
s = EUtils()
print(s.ECitMatch("proc+natl+acad+sci+u+s+a|1991|88|3248|mann+bj|Art1|%0Dscience|1987|235|182|palmenberg+ac|Art2|"))
returns
'proc+natl+acad+sci+u+s+a|1991|88|3248|mann+bj|Art1|2014248\n'
so only the first publication is taken into account. You can get the results for both by using the correct carriage return character \r:
print(s.ECitMatch(bdata="proc+natl+acad+sci+u+s+a|1991|88|3248|mann+bj|Art1|\rscience|1987|235|182|palmenberg+ac|Art2|"))
will return
proc+natl+acad+sci+u+s+a|1991|88|3248|mann+bj|Art1|2014248
science|1987|235|182|palmenberg+ac|Art2|3026048
I think you need to specify neither retmode nor the database (pubmed); if you look at the source code I linked above you can see:
query = "ecitmatch.cgi?db=pubmed&retmode=xml"
so it seems it always uses pubmed and xml.
There are two issues here: a syntactic one, and a bug.
The correct syntax is:
from bioservices import EUtils
s = EUtils()
query = "proc+natl+acad+sci+u+s+a|1991|88|3248|mann+bj|Art1|%0Dscience|1987|235|182|palmenberg+ac|Art2|"
print(s.ECitMatch(query))
Indeed, the underlying service behind ECitMatch has only one database (pubmed) and one format (xml); hence, those two parameters are not exposed: they are hard-coded. Therefore, only one argument is required: your query.
As for the second issue, as pointed out above and reported on the bioservices issues page, your query would return only one publication. This was caused by the special character %0D (standing in for a carriage return) not being interpreted correctly in the URL request. The carriage-return character (whether \n, \r or %0d) is now taken into account in the latest version on GitHub, or from PyPI if you use version 1.7.5.
Thanks to willigot for filing the issue on the bioservices issue tracker and bringing it to my attention.
Disclaimer: I'm the main author of bioservices.
I am using xpath in Python 2.7 with lxml:
from lxml import html
...
tree = html.fromstring(source)
results = tree.xpath(...xpath string...)
Now the problem is the xpath string, and I am getting quite lost in this. I am trying to get all the nodes from one path like this:
//a[@class="hyperlinkClass"]/span/text() (1)
There are no missing entries in this part and this works fine. But I'm also trying to get a part relative to this as well, like so:
//a[@class="hyperlinkClass"]/span/following-sibling::div[@class="divClassName"]/span[@class="spanClassName"]/text() (2)
This works fine by itself, but (2) may or may not have nodes for each node in (1). What I would like to do is to have a default value for if (2) is missing/empty for each (1), say "absent". This sounds straightforward and maybe it is, but I'm hitting a brick wall here.
By doing '(1) | (2)' I get all the values needed, but no way to match them up. If I do '(1) | concat((2), "absent")', this doesn't work either: in XPath 1.0 the union operator only accepts node-sets, so the string produced by concat() can't appear there, even though concat() itself is valid XPath. I saw here the "Becker method", but that doesn't work either (or I can't get it to).
Hopefully, someone can shine a light on how to get this working or if it's even possible.
Don't make this more complicated than it is:
path1 = '//a[@class="hyperlinkClass"]/span'
path2 = './following-sibling::div[@class="divClassName"]/span[@class="spanClassName"]'

for link in tree.xpath(path1):
    other_node = link.xpath(path2)
    if len(other_node):
        print(link.text, other_node[0].text)
    else:
        print(link.text, 'n/a')
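If you want the pairs in a data structure rather than printed, the same loop can collect tuples, using the "absent" default from the question (a small sketch, reusing path1 and path2 from above):

pairs = []
for link in tree.xpath(path1):
    other_node = link.xpath(path2)
    # Fall back to the default when the sibling span is missing.
    pairs.append((link.text, other_node[0].text if other_node else 'absent'))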
I have a database with a bunch of regular documents that look something like this (example from wiki):
{
    "_id": "some_doc_id",
    "_rev": "D1C946B7",
    "Subject": "I like Plankton",
    "Author": "Rusty",
    "PostedDate": "2006-08-15T17:30:12-04:00",
    "Tags": ["plankton", "baseball", "decisions"],
    "Body": "I decided today that I don't like baseball. I like plankton."
}
I'm working in Python with couchdb-python and I want to know if it's possible to add a field to each document. For example, if I wanted to have a "Location" field or something like that.
Thanks!
Regarding IDs
Every document in couchdb has an id, whether you set it or not. Once the document is stored you can access it through the doc._id field.
If you want to set your own ids you'll have to assign the id value to doc._id. If you don't set it, then couchdb will assign a uuid.
If you want to update a document, then you need to make sure you have the same id and a valid revision. If say you are working from a blog post and the user adds the Location, then the url of the post may be a good id to use. You'd be able to instantly access the document in this case.
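For example, with couchdb-python you can either let couchdb assign a uuid or pick the id yourself. A small sketch (the 'posts' database name and the url-style id are made up):

import couchdb

couch = couchdb.Server()      # assumes CouchDB on localhost:5984
db = couch['posts']           # hypothetical database name

# Let couchdb assign a uuid:
doc_id, doc_rev = db.save({'Subject': 'I like Plankton'})

# Or choose your own id, e.g. the url of the post:
db['/posts/i-like-plankton'] = {'Subject': 'I like Plankton'}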
So what's a revision
In your code snippet above you have the doc._rev element. This is the identifier of the revision. If you save a document with an id that already exists, couchdb requires you to prove that you saw the latest revision, so that you are not unknowingly overwriting someone else's changes.
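With couchdb-python, a stale revision shows up as a ResourceConflict exception. A small sketch to illustrate (db is an open couchdb.Database, and 'some_doc_id' is the example id from the question):

import couchdb

doc = db['some_doc_id']
stale_copy = dict(doc)   # keeps the old _rev

db.save(doc)             # succeeds and bumps _rev

try:
    db.save(stale_copy)  # the _rev in here is now out of date
except couchdb.http.ResourceConflict:
    print('someone updated the document first; fetch and retry')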
So how do I update a document
If you have the id of your document, you can just access each document by using the db.get(id) function. You can then update the document like this:
doc = db.get(id)
doc['Location'] = "On a couch"
db.save(doc)
I have an example where I store weather forecast data and update the forecasts approximately every 2 hours. A separate process adds location data that I get from a different provider by looking at characteristics of tweets on the day. That looks something like this:
doc = db.get(id)
doc_with_loc = GetLocationInformationFromOtherProvider(doc) # takes about 40 seconds.
doc_with_loc["_rev"] = doc["_rev"]
db.save(doc_with_loc) # This will fail if the weather update has also updated the document.
If you have concurrent processes, the _rev can become invalid between the get and the save, so you have to have a failsafe; e.g., this could do:
doc = db.get(id)
doc_with_loc = GetLocationInformationFromAltProvider(doc)

update_outstanding = True
while update_outstanding:
    doc = db.get(id)                  # re-retrieve to pick up the latest _rev
    doc_with_loc['_rev'] = doc['_rev']
    try:
        db.save(doc_with_loc)
        update_outstanding = False
    except couchdb.http.ResourceConflict:
        pass                          # lost the race; try again
So how do I get the Ids?
One option suggested above is that you actively set the id so you can retrieve it later, i.e. if a user sets a location that is attached to a URL, use the URL. But you may not know which document you want to update, or you may want a process that finds all the documents that don't have a location yet and assigns one.
You'll most likely be using a view for this. Views have a mapper and a reducer; you'll use the former and can forget about the latter here. A view with a mapper does the following:
It returns a simplified/transformed way of looking at your data. You can emit multiple values per document or skip some documents entirely. Each emitted row gets a key, and if you use the include_docs option it will give you the document as well (with _id and _rev alongside).
The simplest view is the built-in db.view('_all_docs'); this returns all documents, and you may not want to update all of them. (Views themselves, once you define them, are stored as documents as well.)
The next simplest way is to have a view that only returns documents of a given type. I tend to have a _type="article" field in my documents. Think of this as marking which table a document would belong to if you had stored it in a relational database.
Finally, you can filter on documents that lack a location, giving you a view you can iterate over to find all the docs that still need one and fix them in a separate process. The best documentation on writing views can be found here.
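As an illustration, with couchdb-python you could run a throwaway map function along these lines (a sketch; the _type and Location field names are the assumptions from above):

# JavaScript map function passed as a string; emits only documents
# of our type that are still missing a Location.
map_fun = '''function(doc) {
    if (doc._type == "article" && !doc.Location) {
        emit(doc._id, null);
    }
}'''

for row in db.query(map_fun):   # temporary view; fine for a one-off fix
    doc = db[row.id]
    doc['Location'] = 'unknown'
    db.save(doc)

Note that temporary views were removed in CouchDB 2.x, so for anything permanent you'd save the map function in a design document and query it with db.view() instead.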
I'm trying to run some queries against Pubmed's Eutils service. If I run them on the website I get a certain number of records returned, in this case 13126 (link to pubmed).
A while ago I bodged together a python script to build a query to do much the same thing, and the resultant url returns the same number of hits (link to Eutils result).
Of course, since I don't have any formal programming background, it was all a bit kludgy, so I'm trying to do the same thing using Biopython. I think the following code should do the same thing, but it returns a greater number of hits: 23303.
from Bio import Entrez
Entrez.email = "A.N.Other@example.com"
handle = Entrez.esearch(db="pubmed", term="stem+cell[All Fields]",datetype="pdat", mindate="2012", maxdate="2012")
record = Entrez.read(handle)
print(record["Count"])
I'm fairly sure it's just down to some subtlety in how the url is being generated, but I can't work out how to see what url is being generated by Biopython. Can anyone give me some pointers?
Thanks!
EDIT:
It's something to do with how the url is being generated, as I can get back the original number of hits by modifying the code to include double quotes around the search term, thus:
handle = Entrez.esearch(db='pubmed', term='"stem+cell"[ALL]', datetype='pdat', mindate='2012', maxdate='2012')
I'm still interested in knowing what url is being generated by Biopython, as it'll help me work out how I have to structure the search term when I want to do more complicated searches.
handle = Entrez.esearch(db="pubmed", term="stem+cell[All Fields]",datetype="pdat", mindate="2012", maxdate="2012")
print(handle.url)
You've solved this already (Entrez likes explicit double quoting around combined search terms), but currently the URL generated is not exposed via the API. The simplest trick would be to edit the Bio/Entrez/__init__.py file to add a print statement inside the _open function.
Update: Recent versions of Biopython now save the URL as an attribute of the returned handle, i.e. in this example try doing print(handle.url)
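For example (a minimal sketch, assuming a Biopython recent enough that the handle exposes .url):

from Bio import Entrez

Entrez.email = "A.N.Other@example.com"
handle = Entrez.esearch(db="pubmed", term='"stem cell"[All Fields]',
                        datetype="pdat", mindate="2012", maxdate="2012")
print(handle.url)   # shows exactly what was sent to NCBI, quoting included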
Even with all I do know about the AppEngine datastore, I don't know the answer to this. I'm trying to avoid having to write and run all the code it would take to figure it out, hoping someone already knows the answer.
I have code like:
class AddlInfo(db.Model):
    user = db.ReferenceProperty(User)
    otherstuff = db.ListProperty(db.Key, indexed=False)
And create the record with:
info = AddlInfo(user=user)
info.put()
To get this object I can do something like:
# This seems excessively wordy (even though that doesn't directly translate into slower)
info = AddlInfo.all().filter('user =', user).fetch(1)
or I could do something like:
class AddlInfo(db.Model):
    # str(user.key()) is the key_name of this record
    otherstuff = db.ListProperty(db.Key, indexed=False)
Creation looks like:
info = AddlInfo(key_name=str(user.key()))
info.put()
And then get the info with:
info = AddlInfo.get_by_key_name(str(user.key()))
I don't need the ReferenceProperty in the AddlInfo (I got there using the user object in the first place). Which is faster/less resource intensive?
EDIT:
Part of why I was doing it this way is that otherstuff could be a list of 100+ keys and I only need them sometimes (probably less than 50% of the time). I was trying to make it more efficient by not having to load those 100+ keys on every request.
Between those 2 options, the second is marginally cheaper, because you're determining the key by inference rather than looking it up in a remote index.
As Wooble said, it's cheaper still to just keep everything on one entity. Consider an Expando if you just need a way to store a bunch of optional, ad-hoc properties.
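A quick sketch of the Expando idea (the model name and the ad-hoc property are made up):

from google.appengine.ext import db

class UserInfo(db.Expando):
    pass                                  # no fixed schema required

info = UserInfo(key_name='some-user')     # hypothetical key name
info.favourite_colour = 'green'           # ad-hoc property, set on the fly
info.put()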
The second approach is the better one, with one modification: There's no need to use the whole key of the user as the key name of this entity - just use the same key name as the User record.
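In code, that suggestion looks something like this (a sketch, assuming the User entity itself was stored with a key name):

# Create the entity, keyed by the user's key name rather than its whole key:
info = AddlInfo(key_name=user.key().name())
info.put()

# Fetch it later without a query:
info = AddlInfo.get_by_key_name(user.key().name())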