I would like to get every 'ecg_raw' value inside the 'data' items of a document found by 'name', using Python 3 and pymongo.
Given a 'name' and a 'time_info', how can I get all four ecg_raw entries whose time_info == '2018-09-01 00:00:03'?
I want the result as a list like [[8,2],[1,10],[9,4],[1,9]].
I tried
db.g.find({"data":{"$elemMatch":{"time_info":"2018-09-01 00:00:03"}}},{"name":1,"data":{"$elemMatch":{"time_info":"2018-09-01 00:00:03"}}})
but it returns only the first matching element, like this:
{'_id': ObjectId('5b90d401219e9c9f72cac8c4'), 'name': 'testDog3', 'data': [{'time_info': '2018-09-01 00:00:03', 'ecg_raw': [8, 2]}]}
Please help me.
> db.g.find().pretty()
{
"_id" : ObjectId("5b90d401219e9c9f72cac8c4"),
"name" : "testDog3",
"data" : [
{
"time_info" : "2018-09-01 00:00:03",
"ecg_raw" : [
8,
2
]
},
{
"time_info" : "2018-09-01 00:00:03",
"ecg_raw" : [
1,
10
]
},
{
"time_info" : "2018-09-01 00:00:03",
"ecg_raw" : [
9,
4
]
},
{
"time_info" : "2018-09-01 00:00:03",
"ecg_raw" : [
1,
9
]
},
{
"time_info" : "2018-09-01 00:00:04",
"ecg_raw" : [
10,
6
]
},
{
"time_info" : "2018-09-01 00:00:04",
"ecg_raw" : [
1,
6
]
}
]
}
Try this. The $elemMatch projection only ever returns the first matching array element, so you need an aggregation pipeline instead:
db.g.aggregate([{ "$match": { "name": "testDog3" } }, { "$unwind": "$data" }, { "$match": { "data.time_info": "2018-09-01 00:00:03" } }])
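For completeness, a minimal pymongo sketch of the same pipeline; the connection defaults and the database name 'test' are assumptions:

from pymongo import MongoClient

client = MongoClient()  # assumes a local MongoDB instance
db = client['test']     # hypothetical database name

pipeline = [
    {"$match": {"name": "testDog3"}},
    {"$unwind": "$data"},
    {"$match": {"data.time_info": "2018-09-01 00:00:03"}},
]

# each output document holds one unwound 'data' element
ecg_raw = [doc["data"]["ecg_raw"] for doc in db.g.aggregate(pipeline)]
print(ecg_raw)  # [[8, 2], [1, 10], [9, 4], [1, 9]]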
I have a nested JSON-file that looks like this:
[
{
"IsRecentlyVerified": true,
"AddressInfo": {
"Town": "Haarlem",
},
"Connections": [
{
"PowerKW": 17,
"Quantity": 2
}
],
"NumberOfPoints": 1,
},
{
"IsRecentlyVerified": true,
"AddressInfo": {
"Town": "Haarlem",
},
"Connections": [
{
"PowerKW": 17,
"Quantity": 1
},
{
"PowerKW": 17,
"Quantity": 1
},
{
"PowerKW": 17,
"Quantity": 1
}
],
"NumberOfPoints": 1,
}
]
As you can see, the list in this JSON file consists of two dictionaries, each containing another list ("Connections") that holds at least one dictionary. From each dictionary in this JSON file, I want to select every key named "Quantity" and use its value in a calculation (so in the example above, I want to find that there are 5 Quantities in total).
With the code below, I created a simple dataframe in Pandas to make this calculation:
import json
import pandas as pd
df = pd.read_json("chargingStations.json")
dfConnections = df["Connections"]
dfConnections = pd.json_normalize(dfConnections)
print(dfConnections)
Which results in: (screenshot of the resulting dataframe omitted)
Ideally, I want to get the "Quantity" key from each dictionary, so that I can make a dataframe where each item has its own row: (screenshot omitted)
However, I am not sure if this is the best way to make my calculation. I tried to get each value of the "Quantity" key by typing dfConnections = dfConnections.get("Quantity"), but that results in None. So: how can I get the value of each "Quantity" key in each dictionary to make my calculation?
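For the total alone, pandas is not strictly needed; a minimal sketch with the standard library, assuming the file name from the question:

import json

# load the file from the question
with open("chargingStations.json") as f:
    data = json.load(f)

# add up every "Quantity" across all stations and all their connections
total = sum(c["Quantity"] for station in data for c in station["Connections"])
print(total)  # 5 for the sample above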
If data is the parsed JSON from your question, you can do:
import pandas as pd

# data is the parsed JSON list from the question, e.g. data = json.load(f)
df = pd.DataFrame(
    [
        {
            # one column per station: its index mapped to its summed Quantity
            i: sum(dd["Quantity"] for dd in d["Connections"])
            for i, d in enumerate(data)
        }
    ]
)
print(df)
Prints:
   0  1
0  2  3
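If you also want the grand total (5 for the sample data), you can sum across that single row, e.g. print(df.sum(axis=1)[0]).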
You can use json_normalize():
import pandas as pd

# the sample data from the question
a = [
    {
        "IsRecentlyVerified": True,
        "AddressInfo": {"Town": "Haarlem"},
        "Connections": [
            {"PowerKW": 17, "Quantity": 2}
        ],
        "NumberOfPoints": 1,
    },
    {
        "IsRecentlyVerified": True,
        "AddressInfo": {"Town": "Haarlem"},
        "Connections": [
            {"PowerKW": 17, "Quantity": 1},
            {"PowerKW": 17, "Quantity": 1},
            {"PowerKW": 17, "Quantity": 1},
        ],
        "NumberOfPoints": 1,
    },
]
After reading the data, we explode the "Connections" lists, normalize the resulting dictionaries into columns, and group by the original row index to sum the quantities. Note that the index must be reset before the join: explode() duplicates index labels, and the join would otherwise misalign rows.
df = pd.json_normalize(a)
df = df.explode('Connections')
df = df.reset_index()  # keep the original row number as a column named 'index'
df = df.join(pd.json_normalize(df.pop('Connections')))
df = df.groupby('index')['Quantity'].sum().to_frame()
print(df)
'''
       Quantity
index
0             2
1             3
'''
# or another format
df2 = df.T
print(df2)
'''
index     0  1
Quantity  2  3
'''
I have a field in each of my documents like so
'some_field': 3, 5, 10
But each document could have a list of any length:
'some_field': 3, 5, 10 # Doc 1
'some_field': 5 # Doc 2
'some_field': 3, 5, 10, 20, 9 # Doc 3
Is there a way to query and sort by the length, so that my results would be arranged like so:
'some_field': 3, 5, 10, 20, 9 # Doc 3
'some_field': 3, 5, 10 # Doc 1
'some_field': 5 # Doc 2
My current query, sorting by _id at the moment:
es_object.search(index='index', size=500, body={
"sort": [
{"_id": "desc"}
],
"query": {
"bool": {
"must": [
{
"match_all": {}
},
{
"exists": {
"field": "some_field"
}
}
],
"filter": [],
"should": [],
"must_not": []
}
}})
You can use a script sort that orders documents by the number of values in the field. doc['some_field'].size() returns the value count, and the order should be desc so the longest arrays come first, as in your desired output:
{
"sort": {
"_script": {
"script": "doc['some_field'].value.length()",
"type": "number",
"order": "asc"
}
},
"query": {
"bool": {
"must": [{
"match_all": {}
},
{
"exists": {
"field": "some_field"
}
}
],
"filter": [],
"should": [],
"must_not": []
}
}
}
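For reference, a minimal sketch of running this through the Python client used in the question (the index name 'index' and size 500 are the question's own placeholders):

from elasticsearch import Elasticsearch

es_object = Elasticsearch()  # assumes a locally reachable cluster

response = es_object.search(index='index', size=500, body={
    "sort": {
        "_script": {
            "script": "doc['some_field'].size()",
            "type": "number",
            "order": "desc"  # longest arrays first
        }
    },
    "query": {
        "bool": {
            "must": [
                {"match_all": {}},
                {"exists": {"field": "some_field"}}
            ]
        }
    }
})

# print hits in order of decreasing array length
for hit in response['hits']['hits']:
    print(hit['_source'].get('some_field'))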
I would like to set different "boost terms" according to the year of a publication, example:
"boost_term": 10.0 to produced after 2015
"boost_term": 5.0 to produced between 2010 and 2015
"boost_term": 3.0 to produced between 2010 and 2005
and so on..
Current code:
res = es.search(body={
"query": {
"dis_max": {
"queries": [
{
"more_like_this" : {
"fields": [
"article.name",
"article.year"
],
"like" : {
"_index" : "test-index",
"_type" : "researcher",
"_id" : "idResearcher,
},
"min_term_freq" : 1,
"min_doc_freq": 1,
"boost_terms": 5.0
}
},
]
}
}
})
Try something like:
{
"query": {
"bool": {
"must": [
{
"more_like_this": {
"fields": [
"article.name",
"article.year"
],
"like" : {
"_index" : "test-index",
"_type" : "researcher",
"_id" : "idResearcher",
},
"min_term_freq": 1,
"min_doc_freq": 1
}
}
],
"should": [
{
"range": {
"producedYear" : {
"gte" : "2015",
"boost" : 10.0
}
}
},
{
"range": {
"producedYear" : {
"gte" : "2010",
"lt" : "2015"
"boost" : 10.0
}
}
},{
"range": {
"producedYear" : {
"gte" : "2005",
"lt" : "2010"
"boost" : 3.0
}
}
}
]
}
}
}
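A minimal sketch of wiring that body into the Python client call from the question (es is the question's client object; producedYear is assumed to be the field holding the publication year):

res = es.search(body={
    "query": {
        "bool": {
            "must": [{
                "more_like_this": {
                    "fields": ["article.name", "article.year"],
                    "like": {
                        "_index": "test-index",
                        "_type": "researcher",
                        "_id": "idResearcher"
                    },
                    "min_term_freq": 1,
                    "min_doc_freq": 1
                }
            }],
            "should": [
                # each matching range adds its boost to the score
                {"range": {"producedYear": {"gte": "2015", "boost": 10.0}}},
                {"range": {"producedYear": {"gte": "2010", "lt": "2015", "boost": 5.0}}},
                {"range": {"producedYear": {"gte": "2005", "lt": "2010", "boost": 3.0}}}
            ]
        }
    }
})

Because the bool query has a must clause, the should clauses are optional and only raise the score of documents whose year falls in the corresponding range.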
I am trying to import my model using this code:
% Number of classes
classnames={'0','1','2','3','4','5','6','7','8'};
% Load model into Matlab
% net = importKerasNetwork(netfile);
netxx = importKerasNetwork('model.json','WeightFile','model.h5', 'classnames', classnames,'OutputLayerType','classification');
and I am getting the following error:
>> load_keras_network_from_py
Error using importKerasNetwork (line 86)
Reference to non-existent field 'class_name'.
Error in load_keras_network_from_py (line 20)
netxx = importKerasNetwork('model.json','WeightFile','model.h5', 'classnames',
classnames,'OutputLayerType','classification');
Here's the structure of my model in JSON that I am trying to import in MATLAB:
{
"class_name":"Sequential",
"config":{
"name":"sequential_1",
"layers":[
{
"class_name":"Conv2D",
"config":{
"name":"conv2d_1",
"trainable":true,
"batch_input_shape":[
null,
128,
128,
3
],
"dtype":"float32",
"filters":32,
"kernel_size":[
3,
3
],
"strides":[
1,
1
],
"padding":"valid",
"data_format":"channels_last",
"dilation_rate":[
1,
1
],
"activation":"relu",
"use_bias":true,
"kernel_initializer":{
"class_name":"VarianceScaling",
"config":{
"scale":1.0,
"mode":"fan_avg",
"distribution":"uniform",
"seed":null
}
},
"bias_initializer":{
"class_name":"Zeros",
"config":{
}
},
"kernel_regularizer":null,
"bias_regularizer":null,
"activity_regularizer":null,
"kernel_constraint":null,
"bias_constraint":null
}
},
{
"class_name":"MaxPooling2D",
"config":{
"name":"max_pooling2d_1",
"trainable":true,
"pool_size":[
2,
2
],
"padding":"valid",
"strides":[
2,
2
],
"data_format":"channels_last"
}
},
{
"class_name":"Conv2D",
"config":{
"name":"conv2d_2",
"trainable":true,
"filters":32,
"kernel_size":[
3,
3
],
"strides":[
1,
1
],
"padding":"valid",
"data_format":"channels_last",
"dilation_rate":[
1,
1
],
"activation":"relu",
"use_bias":true,
"kernel_initializer":{
"class_name":"VarianceScaling",
"config":{
"scale":1.0,
"mode":"fan_avg",
"distribution":"uniform",
"seed":null
}
},
"bias_initializer":{
"class_name":"Zeros",
"config":{
}
},
"kernel_regularizer":null,
"bias_regularizer":null,
"activity_regularizer":null,
"kernel_constraint":null,
"bias_constraint":null
}
},
{
"class_name":"MaxPooling2D",
"config":{
"name":"max_pooling2d_2",
"trainable":true,
"pool_size":[
2,
2
],
"padding":"valid",
"strides":[
2,
2
],
"data_format":"channels_last"
}
},
{
"class_name":"Conv2D",
"config":{
"name":"conv2d_3",
"trainable":true,
"filters":64,
"kernel_size":[
3,
3
],
"strides":[
1,
1
],
"padding":"valid",
"data_format":"channels_last",
"dilation_rate":[
1,
1
],
"activation":"relu",
"use_bias":true,
"kernel_initializer":{
"class_name":"VarianceScaling",
"config":{
"scale":1.0,
"mode":"fan_avg",
"distribution":"uniform",
"seed":null
}
},
"bias_initializer":{
"class_name":"Zeros",
"config":{
}
},
"kernel_regularizer":null,
"bias_regularizer":null,
"activity_regularizer":null,
"kernel_constraint":null,
"bias_constraint":null
}
},
{
"class_name":"MaxPooling2D",
"config":{
"name":"max_pooling2d_3",
"trainable":true,
"pool_size":[
2,
2
],
"padding":"valid",
"strides":[
2,
2
],
"data_format":"channels_last"
}
},
{
"class_name":"Flatten",
"config":{
"name":"flatten_1",
"trainable":true,
"data_format":"channels_last"
}
},
{
"class_name":"Dense",
"config":{
"name":"dense_1",
"trainable":true,
"units":128,
"activation":"relu",
"use_bias":true,
"kernel_initializer":{
"class_name":"VarianceScaling",
"config":{
"scale":1.0,
"mode":"fan_avg",
"distribution":"uniform",
"seed":null
}
},
"bias_initializer":{
"class_name":"Zeros",
"config":{
}
},
"kernel_regularizer":null,
"bias_regularizer":null,
"activity_regularizer":null,
"kernel_constraint":null,
"bias_constraint":null
}
},
{
"class_name":"Dense",
"config":{
"name":"dense_2",
"trainable":true,
"units":1,
"activation":"softmax",
"use_bias":true,
"kernel_initializer":{
"class_name":"VarianceScaling",
"config":{
"scale":1.0,
"mode":"fan_avg",
"distribution":"uniform",
"seed":null
}
},
"bias_initializer":{
"class_name":"Zeros",
"config":{
}
},
"kernel_regularizer":null,
"bias_regularizer":null,
"activity_regularizer":null,
"kernel_constraint":null,
"bias_constraint":null
}
}
]
},
"keras_version":"2.2.4",
"backend":"tensorflow"
}
I've tried several approaches to tackle this problem (including importing the h5 file instead of the JSON), but I literally have no idea why this is happening... Are there any additional constraints when saving a Keras model with Python to make it run in MATLAB?
I'd appreciate any help.
Your "keras_version":"2.2.4".
Changing it to 2.1.2 can solve this problem.
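A minimal sketch of that edit in Python, assuming you simply rewrite the version string in model.json before calling importKerasNetwork:

import json

with open('model.json') as f:
    model = json.load(f)

# downgrade the version tag that the importer's parser trips over
model['keras_version'] = '2.1.2'

with open('model.json', 'w') as f:
    json.dump(model, f)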
How can I merge these 2 queries into one:
Query 1
db.Response.aggregate([
{
"$match": {
"$and": [
{ "job_details.owner_id" : 428 },
{ "job_details.owner_type" : 'searches' }
]
}
},
{
"$group": {
"_id": "$candidate_city_name_string",
"count": { "$sum": 1 }
}
}
])
Query 2
db.Response.aggregate([
{
"$match": {
"$and": [
{ "job_details.owner_id" : 428 },
{ "job_details.owner_type" : 'searches' }
]
}
},
{
"$group": {
"_id": "$skill",
"count": { "$sum": 1 }
}
}
])
The results of these queries look like this:
output 1:
{
"result": [
{ _id: 'Bangalore', count: 8 },
{ _id: 'cochi', count: 9 }
]
"ok":1
}
output 2:
{
"result": [
{ _id: 'java', count: 7 },
{ _id: 'python', count: 10 }
],
"ok":1
}
How can I get these 2 results in one query?
I need an output like this:
Expected output:
{
"result": [
"candidate_city_name_string": [
{ _id: 'Bangalore', count: 8 },
{ _id: 'cochi', count: 9 }
],
"skill": [
{ _id: 'java', count: 7 },
{ _id: 'python', count: 10 }
]
},
"ok":1
}
Is it possible? I saw something about $facet somewhere, but I didn't understand it.
Yes. $facet runs several sub-pipelines over the same set of input documents and returns all their results in a single document:
db.Response.aggregate([
{"$match":{"$and":[{"job_details.owner_id" : 428},{"job_details.owner_type" : 'searches'}]}},
{"$facet": {
"candidate_city_name_string": [
{"$group": {"_id":"$candidate_city_name_string","count": {"$sum": 1 }}}
],
"skill": [
{"$group": {"_id":"$skill","count": {"$sum": 1 }}}
]
}}])
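A minimal pymongo sketch of the same query (the database name 'mydb' is a placeholder):

from pymongo import MongoClient

client = MongoClient()  # assumes a local MongoDB instance
db = client['mydb']     # hypothetical database name

pipeline = [
    {"$match": {"job_details.owner_id": 428,
                "job_details.owner_type": "searches"}},
    {"$facet": {
        "candidate_city_name_string": [
            {"$group": {"_id": "$candidate_city_name_string",
                        "count": {"$sum": 1}}}
        ],
        "skill": [
            {"$group": {"_id": "$skill", "count": {"$sum": 1}}}
        ]
    }}
]

# $facet returns a single document containing both grouped lists
result = list(db.Response.aggregate(pipeline))
print(result)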