I've been trying to get data from Firebase into my Django app. The issue I face is that some of the documents are retrieved and some aren't. A really weird thing I noticed on the admin page is that the documents we can access are shown in a darker shade than the ones we aren't able to get from the database.
The issue is shown in the image above: the first document is highlighted but the second isn't, and only the first is read by the Django function below.
def home(request, user=""):
    db = firestore.client()
    docs = db.collection(u'FIR_NCR').stream()
    for doc in docs:
        print(doc.id, end="->")
        s = db.collection(u'FIR_NCR').document(doc.id).collection(u'all_data').get()
        print(s[0].id, end="->")
        print(s[0].to_dict())
    return render(request, "home.html", {"user": user})
Because of this, docs does not contain the complete list of documents I need, hence the issue.
It would be wonderful if someone could help me understand what I'm doing wrong. T.I.A.
The document ID isn't actually highlighted. The difference between the first and the second ID is that the second one is in italics. That means there is no actual document with that ID. The reason why the Firestore console shows you a document ID at all for a missing document is because it has a nested subcollection. You can click into that missing document, then again click into the subcollection.
In Firestore, you can have subcollections nested under documents that don't exist. This is OK. Just be aware that these missing documents can't be discovered by a normal query in the collection where you see them in the console.
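If you do need to enumerate those missing parents, the Firestore Python client offers list_documents(), which returns references even for missing documents, while stream() returns only real ones. A minimal sketch, assuming the same FIR_NCR collection; the missing_ids helper is our own:

```python
def missing_ids(all_ids, streamed_ids):
    """IDs visible via list_documents() but invisible to a normal query."""
    return sorted(set(all_ids) - set(streamed_ids))

def find_missing_parents(db):
    # db is a firestore.Client, as in the question's code
    coll = db.collection(u'FIR_NCR')
    all_ids = [ref.id for ref in coll.list_documents()]  # includes missing docs
    streamed_ids = [doc.id for doc in coll.stream()]     # only real docs
    return missing_ids(all_ids, streamed_ids)
```

Any ID reported by find_missing_parents is a "ghost" parent that exists only because it has a subcollection underneath it.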
I would like to know how to create an event with the Graph API.
I have tried the following:
def createEvent(self, body):
    # Graph API calls require an access token; assuming it is stored on self
    return requests.post(
        'https://graph.facebook.com/official_events',
        data=json.dumps(body),
        headers={'Content-Type': 'application/json'},
        params={'access_token': self.access_token},
    )
I can retrieve and display my own data.
I can also post contributions on the website myself.
I have difficulties determining which elements must be included in the content body and which are variable.
Additionally, I wonder where I get the information about the place-id and "roles":{"{page_id}":"{EVENT_ROLE}"}.
I know the page_id, but where do I get the EVENT_ROLE, and what is it needed for?
Link: https://developers.facebook.com/docs/pages/official-events/getting-started
In the documentation it should look like this, but unfortunately I lack the information mentioned above.
I'm using Elasticsearch in Python, and I can't figure out how to get the IDs of the documents deleted by the delete_by_query() method! By default it only returns the number of documents deleted.
There is a parameter called _source that, if set to True, should return the source of the deleted documents. This doesn't happen; nothing changes.
Is there a good way to know which documents were deleted?
The delete by query endpoint only returns a macro summary of what happened during the task, mainly how many documents were deleted and some other details.
If you want to know the IDs of the documents that are going to be deleted, you can run a search (with _source: false) before the delete by query operation and you'll get the expected IDs.
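With the official elasticsearch Python client (8.x keyword-style calls), that two-step approach could look like the sketch below; the index name and query are whatever you would pass to delete_by_query anyway:

```python
def ids_from_search(resp):
    """Pull document IDs out of a search response body."""
    return [hit["_id"] for hit in resp["hits"]["hits"]]

def delete_and_report(es, index, query):
    # 1. collect the matching IDs without fetching their sources
    resp = es.search(index=index, query=query, _source=False, size=10_000)
    doomed = ids_from_search(resp)
    # 2. delete them with the same query
    es.delete_by_query(index=index, query=query)
    return doomed
```

Note there is a window between the two calls in which documents can change, so the returned IDs are a best-effort snapshot, not a guarantee.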
I want to get the text of the edit made on a Wikipedia page before and after the edit. I have this url:
https://en.wikipedia.org/w/index.php?diff=328391582&oldid=328391343
But, I want the text in the json format so that I can directly use it in my program. Is there any API provided by MediaWiki that gives me the old and new text after an edit or do I have to parse the HTML page using a parser?
Try this: https://www.mediawiki.org/wiki/API:Revisions
There are a few options which may be of use, such as:
rvparse: Parse revision content. For performance reasons if this option is used, rvlimit is enforced to 1.
rvdifftotext: Text to diff each revision to.
If those fail there's still
rvprop / ids: Get the revid and, from 1.16 onward, the parentid
Then once you get the parent ID, you can compare the text of the two.
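Putting that together in Python (a sketch; the helper names are ours, and fetching requires the requests library and network access), one query with rvprop=ids|content returns each revision along with its parentid, so the two texts can be compared:

```python
API = "https://en.wikipedia.org/w/api.php"

def revisions_params(title, limit=2):
    """Build the query-string parameters for API:Revisions."""
    return {
        "action": "query",
        "format": "json",
        "prop": "revisions",
        "titles": title,
        "rvprop": "ids|timestamp|content",
        "rvslots": "main",
        "rvlimit": limit,
    }

def parent_of(revision):
    """The revid of the revision this one should be diffed against."""
    return revision.get("parentid")

# usage (requires network):
# import requests
# data = requests.get(API, params=revisions_params("Pet_door")).json()
```

Each revision object in the response carries both revid and parentid, so a second request with rvstartid=parentid fetches the "before" text.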
Leaving a note in JavaScript on how to query the Wikipedia API to get all the recent edits.
In some cases the article gets locked, and the recent edits can't be seen:
🔐 This article is semi-protected due to vandalism
Querying the API as follows allows reading all edits.
fetch("https://en.wikipedia.org/w/api.php?action=query&origin=*&prop=revisions&format=json&titles=Timeline_of_the_2020_United_States_presidential_election&rvslots=*&rvprop=timestamp|user|comment|content")
  .then(v => v.json())
  .then(v => {
    main.innerHTML = JSON.stringify(v, null, 2);
  });
<pre id="main" style="white-space: pre-wrap"></pre>
See also How to get Wikipedia content as text by API?
You can try WikiWho. It tracks every single token written in Wikipedia (with 95% accuracy). In a nutshell, it assigns IDs to every token, and it tracks them based on the context. You just need to check for the existence (or not) of the ID between two revisions (it works even if the revisions are not consecutive).
There is a wrapper and a tutorial. There is a bug in the tutorial because the name of the article changed (instead of "bioglass", you should look for "Bioglass_45S5").
You can (sometimes) access the tutorial online:
I am trying to delete a particular row in the Google App Engine datastore. The list of entries is displayed on a web page, and when the user clicks a button to delete a particular entry, this should be reflected in the datastore. From the Jinja template I'm passing the ID of the entry when the delete button is clicked. The Python code below should handle the delete action in the datastore.
def post(self, id):
    q = db.GqlQuery('SELECT * FROM Input WHERE ID = :1', id)
    for msg in q:
        db.delete(msg)  # I also tried msg.delete(); neither works
It does not show any error message and returns HTTP 200. But when I check the datastore, the entry isn't deleted :(
Please help me to fix this.
I'm guessing that one of two things is happening. One is that id isn't what you expect and the query isn't returning entities (some logging will suss that out). The other is that you're seeing the effects of "eventual consistency", which is described in some detail here. A test for that is whether you still see the entity after some time has elapsed. The fix for the second problem is to delete entities from within a transaction.
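A sketch of that transactional fix, assuming Input is the db.Model from the question and that the template passes the datastore's own numeric ID rather than a custom 'ID' property (a key lookup is both strongly consistent and cheaper than a GQL query):

```python
def coerce_id(raw):
    """Datastore numeric IDs arrive from the URL as strings."""
    return int(raw)

def delete_input(raw_id):
    # db here is google.appengine.ext.db, as in the question
    def txn():
        entity = Input.get_by_id(coerce_id(raw_id))  # lookup by key, not by property
        if entity is not None:
            entity.delete()
    db.run_in_transaction(txn)
```

If the entity was created with a key_name instead of a numeric ID, Input.get_by_key_name(raw_id) would be the lookup to use instead.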
I have been reading up more on CouchDB and really like it (master:master replication is one of the primary reasons I want to use it).
However, I have a question to ask of you guys... I came from PHP and used the Drupal CMS fairly often. One of my favorite plugins (probably of the Drupal community as a whole) was the 'Views' plugin written by MerlinOfChaos. The idea is that an admin can use the Views UI to create a dynamic stream of content from the database. This content could be from any content type (blog posts, articles, users, images, etc.) and could be filtered, ordered, arranged in grids, and so on. One simple example is creating a source of content for an animating slider, where the admin could go in at any time and change what is shown there. Typically I would set it up as the 5 most recent items of content type X.
So with something like Mongo, I can kind of see how you could do this: a fairly advanced parser that converts what the admin wants into a db query. Since Mongo is all based on dynamic querying, it is very doable. However, I want to use Couch.
I have seen that I can create a view that takes a parameter and will return results based on that (such as a parameter of the 5 article id's you want displayed). But what if I want to be able to build something more advanced from the UI? would I just add more parameters? For example, say the created view selects all documents with the value 'contentType' = 'post' and the argument is the id/page title. But what if I want the end user to also be able to choose the content type that the view queries against. Or the 5 most recent articles as long as the content type is one of 3 different values?
Another thing this makes me think of, is once a view like this is created and saved to the db, and called for the first time, it spends the time to build the results. Would you do this on a production/live system?
Part of the idea is that I want an end user to be able to create a custom feed of content on their profile page based on articles and posts on the site, and to be able to filter them and make their own categories, so to speak, and label them, such as their 'tech' feed and their 'food' feed.
I am still new to Couch and still have reading to do, but this is something that was bugging me and I am trying to wrap my head around it, since the product I have in mind is going to be heavily dynamic based on the end users' input.
The application itself will be written in python
In a nutshell, you would need to emit something like this in the view:
emit([doc.contentType, doc.addDate], doc); // emit the entire doc,
// add date is timestamp (assuming)
or
emit([doc.contentType, doc.addDate], null); // use with include_docs=true
Then, when you need to fetch the listing:
startkey=["post",9999999999]&endkey=["post",0]&limit=5&descending=true
Explain:
startkey = ["post",9999999999] = contentType is "post"; with descending=true the range is read in reverse, so the start key carries the highest addDate
endkey = ["post",0] = contentType is "post", and addDate >= 0
limit = 5 = limit to five posts
descending = true = sort descending, which is sort by addDate descending (note that startkey and endkey are swapped when descending is set)
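From Python, the same request could be issued with the requests library; here is a sketch where the host, database, and design-document names are made up. The main gotcha is that startkey/endkey must be JSON-encoded, and that they are swapped when descending=true:

```python
import json

def view_params(content_type, limit=5):
    """Query params for the newest `limit` docs of one contentType."""
    return {
        "startkey": json.dumps([content_type, 9999999999]),  # highest addDate first
        "endkey": json.dumps([content_type, 0]),
        "descending": "true",  # keys swapped because the range is read in reverse
        "limit": limit,
        "include_docs": "true",  # pairs with emitting null as the value
    }

# usage (requires a running CouchDB; URL names are illustrative):
# import requests
# url = "http://localhost:5984/mydb/_design/content/_view/by_type_date"
# rows = requests.get(url, params=view_params("post")).json()["rows"]
```

Changing the content type the user wants is then just a different first element in the key, which answers part of the "more parameters" question: one composite-key view can serve many filters.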
To overcome the drawback of updating views on a live db, you can also create a new design (view) document, so at least your existing code and view won't be affected. Only after the new view is built do you deploy the latest code that switches to it, and then you can retire the older view.