Pymongo how to use full text search - python

I am looking to implement full text search in my Python application using PyMongo. I have been looking at this question, but for some reason I am unable to get it working in my project: I get the error no such cmd: text. Can anyone point out what I am doing wrong?
Here is my code:
db = client.test
collection = db.videos
def search_for_videos(self, search_text):
    self.db.command("text", "videos",
                    search=search_text,
                    limit=10)
The collection I am trying to search is called videos, but I am not sure whether I am passing it in the correct parameter, and I am also not sure whether I need the line project={"name": 1, "_id": 0}.
The documentation here, as far as I can tell, uses the mongo shell to execute commands, whereas I want to perform this action from my code.
I have looked at using the db.videos.find() function, but cannot seem to implement it correctly either.
How do I use PyMongo full text search from my Python code?

First, make sure you have a text index created on the field, as mentioned here, or create it with PyMongo:
collection.create_index([('your field', 'text')])
Using pymongo you can do this to search:
collection.find({"$text": {"$search": your search}})
Your function should look like this:
def search_for_videos(search_text):
    return collection.find({"$text": {"$search": search_text}}).limit(10)
I hope this helps you.
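For reference, here is a slightly fuller sketch that also sorts by relevance and projects only the name field, as asked about in the question (the "score" key is just a label; {"$meta": "textScore"} is what makes MongoDB return the relevance score):
def search_for_videos(search_text):
    # requires a text index, e.g. collection.create_index([('name', 'text')])
    cursor = collection.find(
        {"$text": {"$search": search_text}},
        {"name": 1, "_id": 0, "score": {"$meta": "textScore"}},
    ).sort([("score", {"$meta": "textScore"})]).limit(10)
    return list(cursor)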

First, create a text index on the field you want to search on.
from pymongo import MongoClient, TEXT
db = MongoClient('localhost', port=27017).DBNAME
db.collection.create_index([('FIELD_NAME', TEXT)], default_language="english")
Once the text index exists, use the following query to search. Depending on the size of your database, creating the text index can take a while.
db.collection.find({"$text": {"$search": your_search_string}})

Related

Highlighting in solr with python

I want to use Solr's highlighting feature from Django with Python. How can this be done with the solrpy package?
How does solrpy deal with it, given that the highlighting results live in a separate section of the SolrResponse object, shown as a dictionary of dictionaries?
Also, does solrpy support more of Solr's functionality, such as faceting and highlighting, beyond basic queries?
import solr

sc = solr.SolrConnection("http://localhost:8080/solr/cases")
response_c = sc.query('name:*%s*' % q, fields='name,decision_date', highlight='name')
print(response_c.results)
for hit in response_c.results:
    print(hit)
And why does the code above not produce any highlighting?
The highlighting information is stored in a separate entry named highlighting on the response object:
If you pass in `highlight` to the SolrConnection.query call,
then the response object will also have a "highlighting" property,
which will be a dictionary.
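Based on that, a minimal sketch of reading the highlights from the solrpy response (field names follow the question; the exact nesting depends on your schema):
response_c = sc.query('name:*%s*' % q, fields='name,decision_date', highlight='name')
for doc_id, fields in response_c.highlighting.items():
    # fields maps each highlighted field name to a list of HTML fragments
    print(doc_id, fields.get('name'))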
That being said, I strongly recommend using pysolr instead of solrpy: pysolr is maintained by the django-haystack project and has been developed far more actively over the last few years than solrpy.
Yes. The code below allows highlighting (pysolr, version 3.6.0):
import pysolr
solr = pysolr.Solr('http://localhost:8983/solr/<core/collection>')
results = solr.search('hello', **{
    'hl': 'true',
    'hl.fragsize': 10,
    'hl.field': 'text'
})
for i in results:
    print(i)
print(results.highlighting)
The results.highlighting field stores the highlighted snippets for the search. Other available fields are facets, grouped, hits, spellcheck, and stats. See more info at https://github.com/django-haystack/pysolr

Django/Postgres - No function matches the given name and argument types

I'm trying to create a search system in my Django and Postgresql project but I keep running into an error when I try to make a query.
Whenever I try these commands in the shell:
vector = SearchVector('title','tags')
query = SearchQuery('book') | SearchQuery('harry')
My_Library.objects.annotate(similarity=TrigramSimilarity(vector,test),).filter(similarity__gt=0.3).order_by('-similarity')
I get the error:
"No function matches the given name and argument types. You might need to add explicit type casts."
I've been testing other options for a while, and the only way I can successfully pass a search query without an error is by using two strings in the place of query and vector.
My_Library.objects.annotate(similarity=TrigramSimilarity('title','my search query'),).filter(similarity__gt=0.3).order_by('-similarity')
This will successfully pass my search with no error.
Why am I getting this error, and how can I fix it?
I've been basing my code off of this Full Text Search documentation
TrigramSimilarity takes two string arguments: a field name (or expression) and the string to compare against. You're trying to pass it a SearchVector and a SearchQuery, which is why Postgres can't find a matching function signature.
If you want to search across multiple fields, annotate one similarity expression per field and combine them into a single score, for example by taking the greater of the two with Django's Greatest database function, then filter and sort on that, something like:
from django.db.models.functions import Greatest

My_Library.objects.annotate(
    similarity=Greatest(
        TrigramSimilarity('title', 'my search query'),
        TrigramSimilarity('tags', 'my search query'),
    )
).filter(similarity__gt=0.3).order_by('-similarity')
More details on Greatest and the other database functions:
https://docs.djangoproject.com/en/1.11/ref/models/database-functions/

SolrClient python update document

I'm currently trying to create a small python program using SolrClient to index some files.
My need is that I want to index some file content and then add some attributes to enrich the document.
I used the post command line tool to index the files. Then I use a python program trying to enrich documents, something like this:
doc = solr.get('collection', id)
doc['new_attribute'] = 'value'
solr.index_json('collection',json.dumps([doc]))
solr.commit(openSearcher=True)
The problem is that the indexed file content seems to get lost. If I run a query on a word present in the document's attributes, I find the document.
If I run a query on a word that only appears in the file content, it no longer matches (it does match when I only index the file with post, before my update attempt).
I'm not sure I understand how to update the document while keeping the index created by the post command.
I hope I'm clear enough; maybe I misunderstood the way it works...
thanks a lot
If I understand correctly, you want to modify an existing record. You should be able to do something like this without using a solr.get:
doc = [{'id': 'value', 'new_attribute': {'set': 'value'}}]
solr.index_json('collection', json.dumps(doc))
See also:
https://cwiki.apache.org/confluence/display/solr/Updating+Parts+of+Documents
It worked for me this way; it may be useful for someone else:
import json
from SolrClient import SolrClient

solrConect = SolrClient("http://xx.xx.xxx.xxx:8983/solr/")
doc = [{'id': 'my_id', 'count_related_like': {'set': 10}}]
solrConect.index_json("my_collection", json.dumps(doc))
solrConect.commit("my_collection", softCommit=True)
Trying with curl did not change anything, so I took a different approach and it now works. Instead of adding the file with the post command and trying to modify it afterwards, I read the file into a string and index it in a "content" field, so every document is added in one shot (see the sketch below).
The content field is defined as not stored, so I just index it.
It works fine and suits my needs. It's also simpler, since it drops many attributes set by the post command that I don't need.
If I find some time, I'll try the partial update again and update this post.
Thanks
Rémi

BioPython Pubmed Eutils url?

I'm trying to run some queries against Pubmed's Eutils service. If I run them on the website I get a certain number of records returned, in this case 13126 (link to pubmed).
A while ago I bodged together a python script to build a query to do much the same thing, and the resultant url returns the same number of hits (link to Eutils result).
Of course, not having any formal programming background, it was all a bit kludgy, so I'm trying to do the same thing using Biopython. I think the following code should do the same thing, but it returns a greater number of hits: 23303.
from Bio import Entrez
Entrez.email = "A.N.Other@example.com"
handle = Entrez.esearch(db="pubmed", term="stem+cell[All Fields]", datetype="pdat", mindate="2012", maxdate="2012")
record = Entrez.read(handle)
print(record["Count"])
I'm fairly sure it's just down to some subtlety in how the url is being generated, but I can't work out how to see what url is being generated by Biopython. Can anyone give me some pointers?
Thanks!
EDIT:
It's something to do with how the url is being generated, as I can get back the original number of hits by modifying the code to include double quotes around the search term, thus:
handle = Entrez.esearch(db='pubmed', term='"stem+cell"[ALL]', datetype='pdat', mindate='2012', maxdate='2012')
I'm still interested in knowing what URL is being generated by Biopython, as it'll help me work out how I have to structure the search term when I want to do more complicated searches.
handle = Entrez.esearch(db="pubmed", term="stem+cell[All Fields]",datetype="pdat", mindate="2012", maxdate="2012")
print(handle.url)
You've solved this already (Entrez likes explicit double quoting round combined search terms), but currently the URL generated is not exposed via the API. The simplest trick would be to edit the Bio/Entrez/__init__.py file to add a print statement inside the _open function.
Update: Recent versions of Biopython now save the URL as an attribute of the returned handle, i.e. in this example try doing print(handle.url)
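For completeness, a small sketch putting the two points together, the quoted search term plus inspecting the generated URL on a recent Biopython (the email address is a placeholder):
from Bio import Entrez

Entrez.email = "A.N.Other@example.com"
handle = Entrez.esearch(db="pubmed", term='"stem+cell"[ALL]',
                        datetype="pdat", mindate="2012", maxdate="2012")
print(handle.url)          # the exact EUtils URL Biopython generated
record = Entrez.read(handle)
print(record["Count"])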

Using Python Web GET data

I'm trying to pass information to a python page via the url. I have the following link text:
"<a href='complete?id=%s'>" % (str(r[0]))
on the complete page, I have this:
import cgi
import MySQLdb

def complete():
    form = cgi.FieldStorage()
    db = MySQLdb.connect(user="", passwd="", db="todo")
    c = db.cursor()
    c.execute("delete from tasks where id =" + str(form["id"]))
    return "<html><center>Task completed! Click <a href='/chris'>here</a> to go back!</center></html>"
The problem is that when I go to the complete page, I get a KeyError on "id". Does anyone know how to fix this?
EDIT
When I run cgi.test() it gives me nothing.
I think something is wrong with the way I'm using the URL, because the parameter isn't getting passed through.
It's basically localhost/chris/complete?id=1
/chris/ is a folder and complete is a function within index.py.
Am I formatting the URL the wrong way?
The error means that form["id"] failed to find the key "id" in cgi.FieldStorage().
To test what keys are in the called URL, use cgi.test():
cgi.test()
Robust test CGI script, usable as main program. Writes minimal HTTP headers and formats all information provided to the script in HTML form.
EDIT: a basic test script (using the Python cgi module, with a Linux interpreter path) is only 3 lines. Make sure you know how to run it on your system, then call it from a browser to check that the arguments are seen on the CGI side. You may also want to add traceback formatting with import cgitb; cgitb.enable().
#!/usr/bin/python
import cgi
cgi.test()
Have you tried printing out the value of form to make sure you're getting what you think you're getting? You do have a little problem with your code though... you should be doing form["id"].value to get the value of the item from FieldStorage. Another alternative is to just do it yourself, like so:
import os
import cgi
query_string = os.environ.get("QUERY_STRING", "")
form = cgi.parse_qs(query_string)
This should result in something like this:
{'id': ['123']}
First off, for plain dictionaries you should do lookups via
possibly_none = my_dict.get("key_name")
because this assigns None to the variable if the key is not in the dict. You can then use the
if possibly_none is not None:
    do_stuff()
idiom (yes, I'm a fan of null checks and defensive programming in general...). The Python documentation suggests something along these lines as well.
Without digging into the code too much, for FieldStorage I think you should use
form.getvalue('id')
which returns None when the parameter is missing, to extract the data you seem to be asking for.
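Putting those pieces together, a minimal defensive version of the handler might look like this (the database call is left out; parameter handling is the point here):
import cgi

def complete():
    form = cgi.FieldStorage()
    task_id = form.getvalue("id")  # None if ?id=... was not passed
    if task_id is None:
        return "<html><center>No task id supplied.</center></html>"
    # ... delete the task with this id, ideally via a parameterised query ...
    return "<html><center>Task completed! Click <a href='/chris'>here</a> to go back!</center></html>"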
