Currently, when I DELETE an item that is still being referenced by another resource, the delete succeeds. I would like to prohibit this and return an error instead. I couldn't find anything on http://docs.python-eve.org or Google.
say I have the following two schemas:
foos = {
...
'schema' : {
...,
'bar': {
'type': 'string',
'required': True,
'data_relation': {
'resource': 'bars',
'field': 'name',
'embeddable': True
}
}
}
}
bars = {
...
'schema' : {
...,
'name': {
'type': 'string',
'required': True,
}
}
}
If I now delete an item from bars that is still being referenced by one or more items in foos, how can I return an error instead of deleting the bars item?
Would I need to write an on_delete_item handler, something like
def event(resource_name, item)
and if so, how? Any help is appreciated.
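A minimal sketch of what such a pre-delete hook could look like, assuming Eve's default MongoDB data layer, the foos/bars resources above, and that foos.bar stores the referenced bar's name; the handler name and the 409 status code are my own choices, not something Eve prescribes:
from eve import Eve
from flask import abort, current_app

def check_bar_references(resource_name, item):
    # Only guard deletes on the 'bars' resource.
    if resource_name != 'bars':
        return
    # Look for at least one foo that still references this bar by name.
    foos = current_app.data.driver.db['foos']
    if foos.find_one({'bar': item['name']}) is not None:
        abort(409, description='bar is still referenced by one or more foos')

app = Eve()
app.on_delete_item += check_bar_references  # fired before the item is deleted

if __name__ == '__main__':
    app.run()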
Below is my list of dictionaries
abc = [
{'id':"1", 'name': 'cristiano ronaldo', 'description': 'portugal#fifa.com'},
{'id':"2", 'name': 'lionel messi', 'description': 'argentina#fifa.com'},
{'id':"3", 'name': 'Lionel Jr', 'description': 'brazil#fifa.com'}
]
I ingested the players into Elasticsearch:
for i in abc:
es.index(index="players", body=i, id=i['id'])
Below is the DSL query:
resp = es.search(index="players",body={
"query": {
"query_string": {
"fields": ["id^12","description^2", "name^2"],
"query": "brazil#fifa.com"
}
}})
resp
Issue 1:
if "fields": ["id^12","description^2", "name^2"] then i am getting error RequestError: RequestError(400, 'search_phase_execution_exception', 'failed to create query: For input string: "brazil#fifa.com"'
Issue 2:
if my fields are ["description^2", "name^2"], I expect exactly one document (the one containing brazil#fifa.com), but all 3 documents are returned
Edited: following Sagar's comment, my id mapping was long, which I have now changed; the mapping is below, and Issue 1 is resolved.
{'players': {'mappings': {'properties': {
  'description': {'type': 'text',
                  'fields': {'keyword': {'type': 'keyword', 'ignore_above': 256}}},
  'id': {'type': 'text',
         'fields': {'keyword': {'type': 'keyword', 'ignore_above': 256}}},
  'name': {'type': 'text',
           'fields': {'keyword': {'type': 'keyword', 'ignore_above': 256}}}
}}}}
Issue 1: if "fields": ["id^12","description^2", "name^2"] then i am
getting error RequestError: RequestError(400,
'search_phase_execution_exception', 'failed to create query: For input
string: "brazil#fifa.com"'
You are getting the above issue because your id field is defined as an integer or float type (i.e., not a text field). You need to provide "lenient": true in your query, and then it will not throw this exception.
Issue 2: if my fields are ["description^2", "name^2"], I expect exactly one document (the one containing brazil#fifa.com), but all 3 documents are returned
The above issue happens because you are searching on a text type field, which applies the default standard analyzer at search time.
So when you search for brazil#fifa.com, it is split into two tokens, brazil and fifa.com. Here, fifa.com matches in all 3 of your documents, so they are all returned in the result. To resolve this issue, you can use the description.keyword field.
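As a quick, optional check, the _analyze API can show the tokens the standard analyzer produces for the search string; this is only a diagnostic sketch, reusing the es client from above:
# Inspect how the standard analyzer tokenizes the search string.
tokens = es.indices.analyze(body={"analyzer": "standard", "text": "brazil#fifa.com"})
print([t["token"] for t in tokens["tokens"]])  # expected: ['brazil', 'fifa.com']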
The query below will resolve both of your issues:
{
"query": {
"query_string": {
"lenient": true,
"fields": [
"id^12",
"description.keyword^2",
"name^2"
],
"query": "brazil#fifa.com"
}
}
}
Updated:
Based on the comment: if you want to search for fifa as well, then you need to use description as the field, but when you search for brazil#fifa.com you need to put it in double quotes to get an exact match. Please see the example below:
{
"query": {
"query_string": {
"lenient": true,
"fields": [
"id^12",
"description^2",
"name^2"
],
"query": "\"brazil#fifa.com\""
}
}
}
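For completeness, a sketch of sending that updated query through the Python client used in the question; note how the phrase quotes are embedded inside the Python string:
resp = es.search(index="players", body={
    "query": {
        "query_string": {
            "lenient": True,
            "fields": ["id^12", "description^2", "name^2"],
            "query": '"brazil#fifa.com"'  # quoted for an exact phrase match
        }
    }
})
print(resp["hits"]["hits"])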
I am working with pymongo, and after running the aggregate query
db.collection.aggregate([{'$project': {'Id': '$ResultData.Id','data' : '$Results.Data'}}])
I received the object:
{'data': [{'key': 'valid', 'value': 'true'},
{'key': 'number', 'value': '543543'},
{'key': 'name', 'value': 'Saturdays cx'},
{'key': 'message', 'value': 'it is valid.'},
{'key': 'city', 'value': 'London'},
{'key': 'street', 'value': 'Bigeye'},
{'key': 'pc', 'value': '3566'}],
Is there a way I can access the values by key name, like '$Results.Data.city', and receive London? I would like to do that at the level of the MongoDB aggregate query, meaning I want to write a query like:
db.collection.aggregate([{'$project':
    {'Id': '$ResultData.Id',
     'data': '$Results.Data',
     'city': '$Results.Data.city',
     'name': '$Results.Data.name',
     'street': '$Results.Data.street',
     'pc': '$Results.Data.pc',
    }}])
And receive all the values of provided keys.
Using the $elemMatch projection operator in the following query from mongo shell:
db.collection.find(
{ _id: <some_value> },
{ _id: 0, data: { $elemMatch: { key: "city" } } }
)
The output:
{ "data" : [ { "key" : "city", "value" : "London" } ] }
Using PyMongo (gets the same output):
collection.find_one(
{ '_id': <some_value> },
{ '_id': 0, 'data': { '$elemMatch': { 'key': 'city' } } }
)
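If only the value is needed on the Python side, it can then be pulled out of the returned document; a small sketch, where some_id stands in for the real _id:
doc = collection.find_one(
    {'_id': some_id},  # some_id is a placeholder for the actual _id
    {'_id': 0, 'data': {'$elemMatch': {'key': 'city'}}}
)
city = doc['data'][0]['value'] if doc and doc.get('data') else None
print(city)  # 'London'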
Using PyMongo aggregate method (gets the same result):
INPUT_KEY = 'city'  # define the key to filter on before building the pipeline

pipeline = [
    {
        '$project': {
            '_id': 0,
            'data': {
                '$filter': {
                    'input': '$data', 'as': 'dat',
                    'cond': { '$eq': [ '$$dat.key', INPUT_KEY ] }
                }
            }
        }
    }
]
pprint.pprint(list(collection.aggregate(pipeline)))
Naming the received object result: if result['data'] is always a list of dictionaries with the two keys key and value, you can convert the whole list into a single dictionary, using the keys as keys and the values as values. Since that statement is somewhat confusing, here's the code:
data = {pair['key']: pair['value'] for pair in result['data']}
From here, data['city'] will give you 'London', data['street'] will be 'Bigeye', and so on. Obviously, this assumes that there are no conflicts among the key values in result['data']. Note that this dictionary will (just as the original result['data']) only contain strings, so don't expect data['number'] to be an integer.
Another approach would be to dynamically create an object holding each key-value pair as an attribute, allowing you to use the syntax data.city, data.street, ... (see the sketch below). But this requires slightly more involved code and is a less common and less stable approach.
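For what it's worth, types.SimpleNamespace gives that attribute-style access without a custom class; this is only a sketch built on the dict comprehension above, so the same caveats about duplicate keys and string-only values apply:
from types import SimpleNamespace

# Wrap the key/value pairs in an object that supports attribute access.
data = SimpleNamespace(**{pair['key']: pair['value'] for pair in result['data']})
print(data.city)    # 'London'
print(data.street)  # 'Bigeye'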
I have an Eve app publishing a simple read-only (GET) interface. It interfaces a MongoDB collection called centroids, which has documents like:
[
{
"name":"kachina chasmata",
"location":{
"type":"Point",
"coordinates":[-116.65,-32.6]
},
"body":"ariel"
},
{
"name":"hokusai",
"location":{
"type":"Point",
"coordinates":[16.65,57.84]
},
"body":"mercury"
},
{
"name":"caƱas",
"location":{
"type":"Point",
"coordinates":[89.86,-31.188]
},
"body":"mars"
},
{
"name":"anseris cavus",
"location":{
"type":"Point",
"coordinates":[95.5,-29.708]
},
"body":"mars"
}
]
Currently, (Eve) settings declare a DOMAIN as follows:
crater = {
'hateoas': False,
'item_title': 'crater centroid',
'url': 'centroid/<regex("[\w]+"):body>/<regex("[\w ]+"):name>',
'datasource': {
'projection': {'name': 1, 'body': 1, 'location.coordinates': 1}
}
}
DOMAIN = {
'centroids': crater,
}
This successfully answers requests of the form http://hostname/centroid/<body>/<name>. Inside MongoDB it corresponds to a query like: db.centroids.find({body:<body>, name:<name>}).
What I would also like to do is offer an endpoint for all documents of a given body. I.e., a request to http://hostname/centroids/<body> would return the list of all documents with body==<body>: db.centroids.find({body:<body>}).
How do I do that?
I gave it a shot by including a list of rules under the DOMAIN key centroids (the name of the database collection), like below,
crater = {
...
}
body = {
'item_title': 'body craters',
'url': 'centroids/<regex("[\w]+"):body>'
}
DOMAIN = {
'centroids': [crater, body],
}
but it didn't work...
AttributeError: 'list' object has no attribute 'setdefault'
Got it!
I was assuming the keys in the DOMAIN structure were directly tied to the collection Eve queries. That is true for the default settings, but it can be adjusted inside the resource's datasource.
I figured that out while handling a situation analogous to the one in the question: I wanted an endpoint hostname/bodies listing all the (unique) values of body in the centroids collection. For that, I needed to set an aggregation on it.
The following settings give me exactly that ;)
centroids = {
'item_title': 'centroid',
'url': 'centroid/<regex("[\w]+"):body>/<regex("[\w ]+"):name>',
'datasource': {
'source': 'centroids',
'projection': {'name': 1, 'body': 1, 'location.coordinates': 1}
}
}
bodies = {
'datasource': {
'source': 'centroids',
'aggregation': {
'pipeline': [
{"$group": {"_id": "$body"}},
]
},
}
}
DOMAIN = {
'centroids': centroids,
'bodies': bodies
}
The endpoint http://127.0.0.1:5000/centroid/mercury/hokusai, for example, gives me the name, body, and coordinates of mercury/hokusai.
And the endpoint http://127.0.0.1:5000/bodies gives the list of unique values for body in centroids.
Beautiful. Thumbs up to Eve!
I use the elasticsearch Python API to create mappings, but something went wrong:
es = Elasticsearch("localhost:9200")
request_body = {
"settings": {
"number_of_shards": 5,
"number_of_replicas": 1
},
'mappings': {
'examplecase': {
'properties': {
'tbl_id': {'index': 'not_analyzed', 'type': 'string'},
'texts': {'index': 'analyzed', 'type': 'string'},
}
}
}
}
es.indices.create(index='example_index', body=request_body)
It raises elasticsearch.exceptions.RequestError: RequestError(400, 'mapper_parsing_exception', 'No handler for type [string] declared on field [texts]'). I found suggestions saying to use text instead of string for the field type, but that also fails: elasticsearch.exceptions.RequestError: RequestError(400, 'mapper_parsing_exception', 'Failed to parse mapping [examplecase]: Could not convert [texts.index] to boolean'). The Elasticsearch version is elasticsearch-6.5.4. How can I deal with this?
This
'index': 'analyzed' OR 'index': 'not_analyzed'
is mapping syntax from an older Elasticsearch version and is no longer needed.
All you need to do is use 'text' for analyzed string fields and 'keyword' for not_analyzed text fields, like this:
es = Elasticsearch("localhost:9200")
request_body = {
"settings": {
"number_of_shards": 5,
"number_of_replicas": 1
},
'mappings': {
'examplecase': {
'properties': {
'tbl_id': {'type': 'keyword'},
'texts': {'type': 'text'},
}
}
}
}
es.indices.create(index='example_index', body=request_body)
see reference in Elastic docs here: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html
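As an optional sanity check (assuming the index name from the question), the stored mapping can be read back after creating the index to confirm the field types:
# Confirm the mapping that was actually stored for the new index.
mapping = es.indices.get_mapping(index='example_index')
print(mapping['example_index']['mappings'])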
See this. The index setting in your mapping is incorrectly configured. It is a mapping parameter that can only be set to true or false; the 'analyzed'/'not_analyzed' string values you used inside properties are not valid.
I have the following resource defined:
item = {
'wrapper': {
'type': 'dict',
'schema': {
'element': {
'type': 'objectid',
'data_relation': {
'resource': 'code',
'field': '_id',
'embeddable': True,
},
},
},
},
}
When I try to query using the objectid, I get an empty list.
http://127.0.0.1:5000/item?where={"wrapper.element":"5834987589b0dc353b72c27d"}
5834987589b0dc353b72c27d is the valid _id for the element.
If I move the data relation out of the embedded document, I can query it as expected.
Is there any way to do this with an embedded data relation?
I have just tested with eve==0.7.1 and it works as expected, filtering with ?where={"wrapper.element" : "<your_objectid>"} as you said.
I had a problem where the _id was being stored as a string rather than an ObjectId(), which broke the query.
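If you hit the same problem, a quick way to check (and fix) how the reference is actually stored is sketched below; it goes straight to the underlying collection, and the database and collection names are assumptions based on the resource definition above:
from bson import ObjectId
from pymongo import MongoClient

db = MongoClient()['eve_db']            # database name is an assumption
doc = db['item'].find_one()
print(type(doc['wrapper']['element']))  # should be ObjectId, not str

# If it is a string, convert it so the objectid filter in ?where can match:
if isinstance(doc['wrapper']['element'], str):
    db['item'].update_one(
        {'_id': doc['_id']},
        {'$set': {'wrapper.element': ObjectId(doc['wrapper']['element'])}}
    )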