My goal was to run a query that involves a text search. Below is my attempt:
from pymongo import MongoClient
from datetime import datetime

REMOTE_MONGO_URL = ""
mongo_connection = MongoClient(REMOTE_MONGO_URL)
xml_db = mongo_connection.some_string.some_other_string
# pprint(xml_db.index_information())
a = xml_db.find(
    {
        "source": "winterfell",
        'expiration_date': {
            '$gte': datetime.now().strftime('%Y-%m-%d')
        },
        "$text": {
            '$search': "ned cat john arya sansa"
        }
    },
    # hint='red_wedding'
)
print(a.count())
Initially, when I ran the query, it just ran indefinitely, and I got the feeling it was not using the proper index. So I tried to force the index with $hint; however, that fails with a message saying that $text and $hint cannot be used together.
So my plan is to first perform the initial query without the text search (so that I can use $hint on it), and then run the text search as a second step on the results of the first query. How can I do that?
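A minimal sketch of that idea: run the indexed, non-text part of the query with hint(), then filter by keyword in Python. Here text_field is a placeholder name (the question does not show which fields the text index covers), and the simple token check only approximates real $text semantics (no stemming, no scoring):
keywords = {"ned", "cat", "john", "arya", "sansa"}

# Stage 1: the indexed, non-text part of the query; hint() is allowed here.
candidates = xml_db.find({
    "source": "winterfell",
    "expiration_date": {"$gte": datetime.now().strftime('%Y-%m-%d')},
}).hint("red_wedding")

# Stage 2: keyword filtering in Python on the candidate documents.
matches = [
    doc for doc in candidates
    if keywords & set(str(doc.get("text_field", "")).lower().split())
]
print(len(matches))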
I am adding tags to my Elasticsearch documents from Python. The current way I do it is by making a scan call, getting all the documents back, and then updating each document with the tag I want to append. The code is the following:
from elasticsearch import Elasticsearch, helpers

query_body = {
    "query": {
        "match_phrase": {
            "message": "some_keyword"
        }
    }
}
client = Elasticsearch()
# getting all the documents that contain some_keyword
result = helpers.scan(client, index=tag_info['index'], query=query_body)
Now, after getting all the documents that contain some_keyword, I run a loop that goes through each document and updates the tags field.
The following code reads the already existing tags into the old_tags variable if they exist and appends tags_to_add to them; otherwise it creates a new list of tags. It then makes an update call to the Elasticsearch instance.
for record in result:
    old_tags = record['_source'].get('tags')  # .get() avoids a KeyError when the field is missing
    if old_tags:
        for tag in tag_info['tags_to_add']:
            if tag not in old_tags:
                old_tags.append(tag)
        bd = {
            "doc": {
                "tags": old_tags
            }
        }
        # updates the particular document with new tags
        res = client.update(index=tag_info['index'], id=record['_id'], body=bd)
    else:
        old_tags = list(tag_info['tags_to_add'])
        bd = {
            "doc": {
                "tags": old_tags
            }
        }
        res = client.update(index=tag_info['index'], id=record['_id'], body=bd)
Now, the problem is that this loops through one document at a time and makes an update call to Elasticsearch for each one, which is fine for a few documents but an expensive operation for a large set, half a million documents in my case.
I want to know if there is a bulk operation I can perform to update the tags in bulk and save time.
Any information would be helpful.
Thank you in advance!
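For reference, here is a sketch of the bulk approach using helpers.bulk from the elasticsearch package: build one update action per matching document and ship them in batches instead of issuing one HTTP call each (tag_info and query_body are reused from the question; treat this as untested):
from elasticsearch import Elasticsearch, helpers

client = Elasticsearch()

def build_actions():
    # One "update" action per matching document, instead of one round-trip each.
    for record in helpers.scan(client, index=tag_info['index'], query=query_body):
        old_tags = record['_source'].get('tags') or []
        new_tags = old_tags + [t for t in tag_info['tags_to_add'] if t not in old_tags]
        yield {
            "_op_type": "update",
            "_index": tag_info['index'],
            "_id": record['_id'],
            "doc": {"tags": new_tags},
        }

success, errors = helpers.bulk(client, build_actions(), raise_on_error=False)
print(success, errors)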
I have the record below:
{
    "title": "Kim floral jacquard minidress",
    "designer": "Rotate Birger Christensen"
}
How can I find a record in the collection using an array of values? For example, with the array below, the record above should be selected, because the "title" field contains the word "floral".
['floral', 'dresses']
The query I am using below doesn't work. :(
queryParam = ['floral', 'dresses']

def get_query(queryParam, gender):
    query = {
        "gender": gender
    }
    if len(queryParam) != 0:
        query["title"] = {"$in": queryParam}
    return query

products_query = get_query(queryParam, gender)
products = mongo.db.products.find(products_query)
To add to the previous answer, there's a little bit more to do to get this to work in pymongo. You have to use re.compile() to get the regex search to work:
import re
queryParam = [re.compile('floral'), re.compile('dresses')]
Alternatively, you could use this approach, which removes the need for the $in operator:
import re
queryParam = [re.compile('floral|dresses')]
Or, using the $regex operator, you don't even need re.compile:
queryParam = 'floral|dress'
...
query = {"title": {"$regex": queryParam}}
Take your pick.
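For reference, here is how the question's get_query helper might look with the regex approach folded in (a sketch; the case-insensitive flag is an added assumption):
import re

def get_query(queryParam, gender):
    query = {"gender": gender}
    if queryParam:
        # substring match on title for any of the supplied terms
        query["title"] = {"$in": [re.compile(p, re.IGNORECASE) for p in queryParam]}
    return query

products = mongo.db.products.find(get_query(['floral', 'dresses'], gender))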
You need to do a regex search along with the $in operator:
db.collectionName.find( { title: { $in: [ /floral/, /dresses/ ] } })
I am using Elasticsearch with Python as the client. I want to query through a list of companies. Say the Company field values are:
Gokl
Normn
Nerth
Scenario 1 (using elasticsearch-dsl in Python)
s = Search(using=client, index="index-test") \
.query("match", Company="N")
So when I put "N" in the match query I don't get Normn or Nerth. I think that's probably because of word-based tokenization.
Scenario 2 (using elasticsearch-dsl in Python)
s = Search(using=client, index="index-test") \
.query("match", Company="Normn")
When I enter "Normn" I get the output as expected. So how can I make the search work when I enter just the letter "N", as in scenario 1 above?
I think you are looking for a prefix search. I don't know the Python syntax offhand, but the direct query would look like this (a possible elasticsearch-dsl version follows below):
GET index-test/_search
{
    "query": {
        "prefix": {
            "Company": {
                "value": "N"
            }
        }
    }
}
See here for more info.
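For the elasticsearch-dsl side, the equivalent would presumably be something like this (a sketch, reusing the client from the question):
s = Search(using=client, index="index-test") \
    .query("prefix", Company="N")
for hit in s.execute():
    print(hit.Company)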
If I understand correctly, you need to query companies starting with a specific letter.
In this case you can use this query:
{
    "query": {
        "regexp": {
            "Company": "n.*"
        }
    }
}
Please read about the query types here.
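In elasticsearch-dsl that would presumably translate to something like this (a sketch):
s = Search(using=client, index="index-test") \
    .query("regexp", Company="n.*")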
For this case, you can use the code below:
s = Search(using=client, index="index-test") \
    .query("match_phrase_prefix", Company="N")
You can use a multi_match query to search Company and another field like this:
s = Search(using=client, index="index-test") \
    .query("multi_match", query="N", fields=['Company', 'Another_field'], type='phrase_prefix')
I am using the Facebook Ad Insights API from a Python script. When I query for a particular date using the following snippet:
# facebook_business SDK (older versions shipped as the facebookads package)
from facebook_business.adobjects.adaccount import AdAccount

start_time = '2012-08-01'
end_time = '2012-08-01'
account1 = AdAccount('act_XXXXXXXX')
params = {
    'level': 'account',
    'time_range': {
        'since': start_time,
        'until': end_time,
    },
    'fields': 'spend',
}
insights1 = account1.get_insights(params=params)
print(insights1[-1])
I get a response:
{
    "date_start": "2012-08-01",
    "date_stop": "2012-08-01",
    "spend": 573.22
}
This is the only object in the list. I was wondering if there is a good way to parse this output into a single comma-separated line.
Maybe there is a better way to do this, but I wrote the first result to an intermediate file, then read the file back, searching for the spend string and using a basic split to get the value.
for line in open(thisfilename + 'intermediate.csv'):
    if "spend" in line:
        print(line)
        print(line.split(":")[-1])
That did the trick for me.
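A simpler alternative, skipping the intermediate file, is to read the fields straight off the insight object; this sketch assumes the facebook_business SDK, where result objects expose export_all_data() to get a plain dict:
row = insights1[-1].export_all_data()
print(','.join(str(row[k]) for k in ('date_start', 'date_stop', 'spend')))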
Can someone tell me how to write Python statements that will aggregate (sum and count) stuff about my documents?
SCRIPT
from datetime import datetime
from elasticsearch_dsl import DocType, String, Date, Integer
from elasticsearch_dsl.connections import connections
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, Q
# Define a default Elasticsearch client
client = connections.create_connection(hosts=['http://blahblahblah:9200'])
s = Search(using=client, index="attendance")
s = s.execute()
for tag in s.aggregations.per_tag.buckets:
    print(tag.key)
OUTPUT
File "/Library/Python/2.7/site-packages/elasticsearch_dsl/utils.py", line 106, in __getattr__
'%r object has no attribute %r' % (self.__class__.__name__, attr_name))
AttributeError: 'Response' object has no attribute 'aggregations'
What is causing this? Is the "aggregations" keyword wrong? Is there some other package I need to import? If a document in the "attendance" index has a field called emailAddress, how would I count which documents have a value for that field?
First of all, I notice now that what I wrote above actually has no aggregations defined; the documentation on how to use this was not very readable for me. Starting from what I wrote above, I'll expand. I'm changing the index name to make for a nicer example.
from datetime import datetime
from elasticsearch_dsl import DocType, String, Date, Integer
from elasticsearch_dsl.connections import connections
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, Q
# Define a default Elasticsearch client
client = connections.create_connection(hosts=['http://blahblahblah:9200'])
s = Search(using=client, index="airbnb", doc_type="sleep_overs")
s = s.execute()
# invalid! You haven't defined an aggregation.
#for tag in s.aggregations.per_tag.buckets:
# print (tag.key)
# Lets make an aggregation
# 'by_house' is a name you choose, 'terms' is a keyword for the type of aggregator
# 'field' is also a keyword, and 'house_number' is a field in our ES index
s.aggs.bucket('by_house', 'terms', field='house_number', size=0)
Above we're creating one bucket per house number, so the name of each bucket is the house number. Elasticsearch (ES) will always give a document count of the documents fitting into each bucket. size=0 means to give us all buckets, since ES defaults to returning only 10 results (or whatever your dev set it up to do).
# This runs the query.
s = s.execute()
# let's see what's in our results
print s.hits.total
print s.aggregations.by_house.buckets
for item in s.aggregations.by_house.buckets:
    print item.doc_count
My mistake before was thinking an Elasticsearch query has aggregations by default; you define them yourself, then execute them. Then your response can be split by the aggregators you mentioned.
The CURL for the above should look like:
NOTE: I use SENSE, an Elasticsearch plugin/extension/add-on for Google Chrome. In SENSE you can use // to comment things out.
POST /airbnb/sleep_overs/_search
{
    // the size 0 here actually means: do not return any hits, just the aggregation part of the result
    "size": 0,
    "aggs": {
        "by_house": {
            "terms": {
                // the size 0 here means: return all buckets, not just the default 10
                "field": "house_number",
                "size": 0
            }
        }
    }
}
Work-around: someone on the GitHub repo of the DSL told me to forget translating and just use this method. It's simpler, and you can write the tough stuff in CURL. That's why I call it a work-around.
# Define a default Elasticsearch client
client = connections.create_connection(hosts=['http://blahblahblah:9200'])
s = Search(using=client, index="airbnb", doc_type="sleep_overs")
# see how simple it is: we just paste the CURL body here
body = {
    "size": 0,
    "aggs": {
        "by_house": {
            "terms": {
                "field": "house_number",
                "size": 0
            }
        }
    }
}
s = Search.from_dict(body)
s = s.index("airbnb")
s = s.doc_type("sleep_overs")
body = s.to_dict()
t = s.execute()
for item in t.aggregations.by_house.buckets:
    # item.key will be the house number
    print item.key, item.doc_count
Hope this helps. I now design everything in CURL, then use Python statements to peel away at the results to get what I want. This helps for aggregations with multiple levels (sub-aggregations).
I do not have the rep to comment yet, but wanted to make a small fix to Matthew's comment on VISQL's answer regarding from_dict. If you want to maintain the search properties, use update_from_dict rather than from_dict.
According to the docs, from_dict creates a new Search object, but update_from_dict will modify it in place, which is what you want if the Search already has properties such as index, using, etc.
So you would want to declare the query body before the search and then create the search like this:
query_body = {
    "size": 0,
    "aggs": {
        "by_house": {
            "terms": {
                "field": "house_number",
                "size": 0
            }
        }
    }
}
s = Search(using=client, index="airbnb", doc_type="sleep_overs").update_from_dict(query_body)