I'm currently working with a database and I would like to display its values on a webpage using Highcharts.
Here is what I use to fetch the data in the web app:
import json
import sqlite3
from flask import Flask

app = Flask(__name__)

@app.route("/data.json")
def data():
    connection = sqlite3.connect("/home/pi/database/Main_Database.db")
    cursor = connection.cursor()
    cursor.execute("SELECT epochTime, data_x FROM table")
    results = cursor.fetchall()
    return json.dumps(results)
Then I currently read these values in my HTML like this:
$.getJSON('http://192.168.1.xx/data.json', function (data) {
    // Create the chart
    $('#container').highcharts('StockChart', {
        rangeSelector: {
            selected: 1
        },
        title: {
            text: 'title'
        },
        series: [{
            name: 'Value',
            data: data,
            tooltip: {
                valueDecimals: 2
            }, .......
This works if I want to display only one data array.
If I want to display more than one array, it looks like each array must be preceded by its name and follow a specific format (I checked the data sample used by Highcharts).
Example:
data1:[(epochTime, 200),(epochTime,400)];data2:[(epochTime, 2),(epochTime,4)]
I have some trouble getting json.dumps to emit two arrays from two different tables, for example. I tried the following command: json.dumps({data1:results}).
But the result is still not readable.
Do you have any advice, or examples/templates of web apps using Highcharts with SQLite?
Thanks a lot!
I think this should work:
In the controller, fetch the two result sets and put them in a dictionary:
@app.route("/data.json")
def data():
    connection = sqlite3.connect("/home/pi/database/Main_Database.db")
    cursor = connection.cursor()
    cursor.execute("SELECT epochTime, data_x FROM table")
    results1 = cursor.fetchall()
    cursor.execute("SELECT epochTime, data_x FROM table2")
    results2 = cursor.fetchall()
    return json.dumps({'result1': results1,
                       'result2': results2})
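One caveat, assuming epochTime is stored in seconds: Highcharts expects timestamps in milliseconds, so you may need to convert before dumping, e.g.:

results1 = [(t * 1000, v) for (t, v) in results1]
results2 = [(t * 1000, v) for (t, v) in results2]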
On the page:
$.getJSON('http://192.168.1.xx/data.json', function (data) {
    // Create the chart
    $('#container').highcharts('StockChart', {
        rangeSelector: {
            selected: 1
        },
        title: {
            text: 'title'
        },
        series: [{
            name: 'Value1',
            data: data.result1, // read result1
            tooltip: {
                valueDecimals: 2
            }
        }, {
            name: 'Value2',
            data: data.result2, // read result2
            tooltip: {
                valueDecimals: 2
            }, .......
How to remove the _id field while extracting a MongoDB document to JSON in Python? I have written the code but get nothing in the JSON output.
The MongoDB document looks like:
db.collection.find().pretty()
{
"_id" : ObjectId("612334997e2f032b9f077eb7"),
"sourceAttribute" : "first_name",
"domainAttribute" : "First_Name"
}
Code tried:

import pymongo
from bson.json_util import dumps  # assuming dumps comes from bson.json_util

myclient = pymongo.MongoClient('mongodb://localhost:27017/')
mydb = myclient["guid"]
mycol = mydb["mappedfields"]

cursor = mydb.mycol.find({}, {'_id': False})
list_cur = list(cursor)
json_data = dumps(list_cur, indent=1)

with open('mapping_files/mapping_from_mongodb.json', 'w') as file:
    file.write(json_data)
Output I am getting:
[]
Expected output:
[
{
"sourceAttribute": "first_name",
"domainAttribute": "First_Name"
}
]
cursor = mycol.find({}, {'_id': False})

Query mycol (the collection object) rather than mydb.mycol; the latter reads a collection literally named "mycol", which is empty, which is why you get [].
The _id exclusion belongs in the second argument (the projection).
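Putting it together, a minimal corrected sketch (assuming the same database and collection names as above; plain json.dumps is enough once _id is projected out):

import json
import pymongo

myclient = pymongo.MongoClient('mongodb://localhost:27017/')
mydb = myclient["guid"]
mycol = mydb["mappedfields"]

# Query the collection object directly and project _id out
list_cur = list(mycol.find({}, {'_id': False}))
json_data = json.dumps(list_cur, indent=1)

with open('mapping_files/mapping_from_mongodb.json', 'w') as file:
    file.write(json_data)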
I have a pandas data frame like the one below.
I am using the code below to insert the data into MongoDB:
mydb = conn["mydatabase"]
mycol = mydb["test"]

x = results_df["user"]  # results_df is the data frame
for item in x:
    mycol.collection.insert({"user": item}, check_keys=False)
It ends up in this format:
{ "_id" : ObjectId("5bc0df186b3f65f926bceaeb"), "user" : ".287aa7e54ebe4088ac0a7983df4e4a28.#fnwp.vivox.com" }
{ "_id" : ObjectId("5bc0df186b3f65f926bceaec"), "user" : ".8f47cf677f9b429ab13245e12ce2fdda.#fnwp.vivox.com" }
{ "_id" : ObjectId("5bc0df186b3f65f926bceaed"), "user" : ".9ab4cdcc2cd24c9688f162817cbbbf34.#fnwp.vivox.com" }
I want to insert more fields into each document, like below:
{ "_id" : ObjectId("5bc0df186b3f65f926bceaeb"), "user" : ".287aa7e54ebe4088ac0a7983df4e4a28.#fnwp.vivox.com", "ua":"Vivox-SDK-4.9.0002.29794O" , "type":"vx_pp_log"}
I want to insert billions of rows like this, and I would like to keep it dynamic, as I may add more fields in the future.
Here you go:

import pandas as pd

mydb = conn["testdb"]
mycol = mydb["test"]

user = results_df['user']
ua = results_df['ua']
time = results_df['#timestamp']

df = pd.DataFrame({'user': user, 'ua': ua, 'time': time})  # keep adding columns as needed
mycol.insert_many(df.to_dict('records'))  # one document per DataFrame row
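Since you mention billions of rows: a single insert_many over the whole frame will not scale to that. A hedged sketch (the 10_000 chunk size is an arbitrary choice, not from the original answer) is to insert in chunks:

records = df.to_dict('records')
chunk = 10_000  # tune for your workload
for i in range(0, len(records), chunk):
    mycol.insert_many(records[i:i + chunk])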
Here is my sample MongoDB database:
database image for one object
The above is a database with an array of articles. I fetched only one object for simplicity purposes.
database image for multiple objects (max 20, as that's the size limit)
I have about 18k such entries.
I have to extract the description and title tags present inside the (articles and 0) subsections.
The find() method is the question here. I have tried this:
for i in db.ncollec.find({'status': "ok"}, {'articles.0.title': 1, 'articles.0.description': 1}):
    for j in i:
        save.write(j)
After executing the code, the file save contains this:
_id
articles
_id
articles
and it goes on and on.
Any help on how to print what I stated above?
My entire code, for reference:
import json
from newsapi import NewsApiClient
from pymongo import MongoClient

client = MongoClient()
db = client.dbasenews
ncollec = db.ncollec

newsapi = NewsApiClient(api_key='**********')

source = open('TextsExtractedTemp.txt', 'r')
destination = open('NewsExtracteddict.txt', "w")

for word in source:
    if word == '\n':
        continue
    all_articles = newsapi.get_everything(q=word, language='en', page_size=1)
    print(all_articles)
    json.dump(all_articles, destination)
    destination.write("\n")
    try:
        ncollec.insert(all_articles)
    except:
        pass
Okay, so I checked a little to update my rusty memory of pymongo, and here is what I found.
The correct query should be:
db.ncollec.find({'status': "ok",
                 'articles.title': {'$exists': True},
                 'articles.description': {'$exists': True}})
Now, if you do this:

query = {'status': "ok",
         'articles.title': {'$exists': True},
         'articles.description': {'$exists': True}}

for item in db.ncollec.find(query):
    print(item)

and it doesn't show anything, then the query is correct but you don't have the right database, or the right tree, or whatever.
But I assure you that, with the database you showed me, if you do...

query = {'status': "ok",
         'articles.title': {'$exists': True},
         'articles.description': {'$exists': True}}

for item in db.ncollec.find(query):
    save.write(item['articles'][0]['title'])
    save.write(item['articles'][0]['description'])

...it'll do what you wished to do in the first place.
Now, the exact keys might still be off, but I can't really help more there, since I only have what you showed on the screen. :)
Okay, now. I have found something for you that is a bit more complicated, but is cool. :)
But I'm not sure it'll work for you. I suspect you're giving us the wrong tree, since when you do .find({'status': 'ok'}) it doesn't return anything, and it should return all the documents with 'status': 'ok', and you have lots...
Anyway, here is the query, which you should use with the .aggregate() method instead of .find():
elem = {'$match': {'status': 'ok',
                   'articles.title': {'$exists': True},
                   'articles.description': {'$exists': True}}}
pipeline = [elem, {'$unwind': '$articles'}, elem]
If you want an explanation of how this works, I invite you to read this page.
This query will return ONLY the elements in your array that have a title and a description, with a status of ok. If an element doesn't have a title or a description, it will be ignored.
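For completeness, a hedged sketch of how you would run it (reusing the save file object from the question; after $unwind, each result carries a single embedded articles document):

for doc in db.ncollec.aggregate(pipeline):
    save.write(doc['articles']['title'])
    save.write(doc['articles']['description'])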
Below is my sample JSON. I am trying to extract the "attributes" part of the JSON and insert it into a relational database, but I need to turn the "name" values into relational columns and insert the "value" values into the table. I mean, for
{"name":"ID","value":"528BE6D9FD"}, "ID" becomes a column and 528BE6D9FD is inserted under "ID". It's just the beginning of my Python learning, so I am not sure how to construct columns from dictionary values.
d = 'C:/adapters/sample1.json'
json_data = open(d).read()
json_file = json.loads(json_data)

for children in json_file["events"]:
    for grandchildren in children["attributes"]:
        for key, value in grandchildren.items():
            print(value)
{
    "events": [
        {
            "timestamp": "2010-11-20T11:08:00.978Z",
            "code": "Event",
            "namespace": null,
            "version": null,
            "attributes": [
                {
                    "name": "ID",
                    "value": "528BE6D9FD"
                },
                {
                    "name": "Total",
                    "value": 67
                },
                {
                    "name": "PostalCode",
                    "value": "6064"
                },
                {
                    "name": "Category",
                    "value": "More"
                },
                {
                    "name": "State",
                    "value": "QL"
                },
                {
                    "name": "orderDateTime",
                    "value": "2010-07-20T12:08:13Z"
                },
                {
                    "name": "CategoryID",
                    "value": "1091"
                },
                {
                    "name": "billingCountry",
                    "value": "US"
                },
                {
                    "name": "shipping",
                    "value": "Go"
                },
                {
                    "name": "orderFee",
                    "value": 77
                },
                {
                    "name": "Name",
                    "value": "Roy"
                }
            ]
        }
    ]
}
As far as extracting the attributes array of your JSON data goes, I would do it like so:
import json

json_path = "c:\\adapters\\sample1.json"
with open(json_path) as json_file:
    json_dict = json.load(json_file)

attributes = json_dict['events'][0]['attributes']
Now, I don't know which database system you are using, but regardless, you can extract the names and values with list comprehensions like so:
names = [attr['name'] for attr in attributes]
values = [attr['value'] for attr in attributes]
And now just create a table if needed, insert names as column headers, and insert values as a single row lined up with those names, as in the sketch below.
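For instance, a minimal sketch with sqlite3 (the events table name and attributes.db file are my inventions; the column names come straight from the JSON, so only build SQL this way from trusted input):

import sqlite3

connection = sqlite3.connect("attributes.db")
cursor = connection.cursor()

# Build the column list dynamically from the extracted names
columns = ", ".join('"{}"'.format(name) for name in names)
cursor.execute('CREATE TABLE IF NOT EXISTS events ({})'.format(columns))

# Insert the values as one row, in the same order as the names
placeholders = ", ".join("?" for _ in values)
cursor.execute('INSERT INTO events VALUES ({})'.format(placeholders), values)

connection.commit()
connection.close()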
I do not understand why I get this error. Bytes_Written is in the dataset, so why can't Python find it? I am getting this information (see the dataset below) from a VM. I want to select Bytes_Written and Bytes_Read, then subtract the previous value from the current value and print a JSON object like this:
{'Bytes_Written': previousValue-currentValue, 'Bytes_Read': previousValue-currentValue}
Here is what the data looks like:
{
    "Number of Devices": 2,
    "Block Devices": {
        "bdev0": {
            "Backend_Device_Path": "/dev/disk/by-path/ip-192.168.26.1:3260-iscsi-iqn.2010-10.org.openstack:volume-d1c8e7c6-8c77-444c-9a93-8b56fa1e37f2-lun-010.0.0.142",
            "Capacity": "2147483648",
            "Guest_Device_Name": "vdb",
            "IO_Operations": "97069",
            "Bytes_Written": "34410496",
            "Bytes_Read": "363172864"
        },
        "bdev1": {
            "Backend_Device_Path": "/dev/disk/by-path/ip-192.168.26.1:3260-iscsi-iqn.2010-10.org.openstack:volume-b27110f9-41ba-4bc6-b97c-b5dde23af1f9-lun-010.0.0.146",
            "Capacity": "2147483648",
            "Guest_Device_Name": "vdb",
            "IO_Operations": "93",
            "Bytes_Written": "0",
            "Bytes_Read": "380928"
        }
    }
}
This is the complete code that I am running.
import time
import requests

FIELDS = ("Bytes_Written", "Bytes_Read", "IO_Operation")

def counterVolume_one(state):
    url = 'http://url'
    r = requests.get(url)
    data = r.json()
    for field in FIELDS:
        state[field] += data[field]
    return state

state = {"Bytes_Written": 0, "Bytes_Read": 0, "IO_Operation": 0}

while True:
    counterVolume_one(state)
    time.sleep(1)
    for field in FIELDS:
        print("{field:s}: {count:d}".format(field=field, count=state[field]))
Your returned JSON structure does not have any of the FIELDS = ("Bytes_Written", "Bytes_Read", "IO_Operation") keys at the top level; they are nested under each device in "Block Devices". (Also note the dataset spells it IO_Operations, plural, so that entry in FIELDS needs renaming as well.)
You'll need to modify your code slightly:
data = r.json()
for block_device in data['Block Devices']:
    for field in FIELDS:
        state[field] += int(data['Block Devices'][block_device][field])
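To get the per-interval deltas the question asks for, one hedged sketch (the previous dict is my addition, not part of the original code) is to remember the last totals and diff them on each pass:

previous = dict(state)

while True:
    counterVolume_one(state)
    deltas = {field: state[field] - previous[field] for field in FIELDS}
    print(deltas)  # e.g. {'Bytes_Written': ..., 'Bytes_Read': ..., 'IO_Operation': ...}
    previous = dict(state)
    time.sleep(1)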