I am taking readings from 4 sensors. I get an array like this:
[{"value":0.162512,"number":0,"channel":0},
{"value":0.027835,"number":1,"channel":1},
{"value":0.08361,"number":2,"channel":2},
{"value":0.295788,"number":3,"channel":3},
{"value":0.137746,"number":4,"channel":0},
{"value":0.009403,"number":5,"channel":1},
{"value":0.089616,"number":6,"channel":2},
{"value":0.310242,"number":7,"channel":3},
{"value":0.109047,"number":8,"channel":0},
...
{"value":0.085652,"number":28,"channel":0},
{"value":0.01359,"number":29,"channel":1},
{"value":0.105441,"number":30,"channel":2},
{"value":0.32407,"number":31,"channel":3}]
From reading around here, I guess I need to format and convert it into a JSON structure, and then use flot to draw a graph. That is the goal.
I want a line graph showing each reading from the four sensors. I will be using this in Python eventually, if that helps with the direction I am going.
I have no clue what I am doing, so any direction would be appreciated.
Having no clue is not a good starting point ... See the flot documentation and examples to get started.
What you have there is one array of objects. (It already looks like valid JSON. If it is still on the Python side, embed it as a string in your JavaScript and call JSON.parse() on it.)
What you need is an array of arrays (dataseries) of arrays (datapoints). Something like
[
[ // dataseries for channel 0
[0, 0.162515],
[4, 0.137746],
...
],
[ // dataseries for channel 1
[1, 0.027835],
[5, 0.009403],
...
],
...
]
To convert you can loop over your original array and put the datapoints in the right dataseries with something like this:
var dataAsArrays = [
[], [], [], [] // one empty array for each dataseries / channel
];
$.each(dataAsObjects, function (index, item) {
dataAsArrays[item.channel].push([item.number, item.value]);
});
See this fiddle for a working example of the above code.
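Since you mention moving to Python eventually: the same grouping can be done on the Python side before handing the data to flot. A minimal sketch (readings is a stand-in for your original array):

```python
# readings stands in for the original array of sensor readings
readings = [
    {"value": 0.162512, "number": 0, "channel": 0},
    {"value": 0.027835, "number": 1, "channel": 1},
    {"value": 0.08361,  "number": 2, "channel": 2},
    {"value": 0.295788, "number": 3, "channel": 3},
    {"value": 0.137746, "number": 4, "channel": 0},
]

# Build one list of [x, y] datapoints per channel, the shape flot expects.
data_as_arrays = [[] for _ in range(4)]
for item in readings:
    data_as_arrays[item["channel"]].append([item["number"], item["value"]])
```

json.dumps(data_as_arrays) then gives you a string you can hand straight to JSON.parse() and flot.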
I have a Movies collection with the following structure:
{
"_id":{"$oid":"5f4fbb10c90790a35f78474b"},
"title":"Mother to Earth",
"year":2020,
"description":"A group of simps tries to find the source of an obscure meme game.",
"screenings":
[
{
"screeningID":{"$oid":"5f4fbb10c90790a35f78474a"},
"timedate":"2020-09-29, 18:00PM",
"tickets":50
}
]
}
I want to make a query that outputs the array matching screeningID. In this case, the output should be:
{
"screeningID":{"$oid":"5f4fbb10c90790a35f78474a"},
"timedate":"2020-09-29, 18:00PM",
"tickets":50
}
However, when I do a find query, it outputs the entry for the whole movie. How do I make it output exactly the array I want in screenings?
Use a projection, i.e. pass a second parameter {screenings: 1} (here 1 means the field is included). Note that this still returns the whole screenings array; to get only the element matching a given screeningID, query on it and combine it with the positional projection, e.g. find({"screenings.screeningID": id}, {"screenings.$": 1}).
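For illustration, here is the same selection sketched in plain Python on a dict shaped like the document above (values copied from the question):

```python
# A dict mirroring the movie document from the question
movie = {
    "title": "Mother to Earth",
    "year": 2020,
    "screenings": [
        {"screeningID": "5f4fbb10c90790a35f78474a",
         "timedate": "2020-09-29, 18:00PM",
         "tickets": 50},
    ],
}

target = "5f4fbb10c90790a35f78474a"
# Keep only the screenings entries whose screeningID matches,
# which is what the positional projection returns on the server side.
matches = [s for s in movie["screenings"] if s["screeningID"] == target]
```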
I am trying to find the highest daily high within daily kline data from Binance. I can fetch the list of lists with this API call:
client.get_historical_klines('maticbtc', Client.KLINE_INTERVAL_1DAY, "2 days ago UTC")
The output is a list of daily historical prices in 'open' 'high' 'low' 'close' 'volume' format.
[[1599264000000,
'0.00000185',
'0.00000192',
'0.00000171',
'0.00000177',
'208963036.00000000',
1599350399999,
'377.04825679',
14595,
'82785887.00000000',
'150.17277108',
'0'],
[1599350400000,
'0.00000177',
'0.00000185',
'0.00000170',
'0.00000182',
'114643846.00000000',
1599436799999,
'204.99814224',
9620,
'55503278.00000000',
'99.62131279',
'0']]
I would like to find the highest 'high' value in this list. I am currently able to reference a single daily 'high' value using this code:
client.get_historical_klines('maticbtc', Client.KLINE_INTERVAL_1DAY, "30 days ago UTC")[0][2]
output:
0.00000192
Thank you for your suggestions!
I would like to find the highest 'high' value in this list.
The example you present is not a list, it is a list of lists:
data30days = [
[ ... ],
[ ... ],
...
]
In general the "highest" value is simply the maximum. For a list of lists of plain numbers, the code finding the maxima would be:
maxima30days = [ max(x) for x in data30days ]
totalMaximum = max(maxima30days)
But there is one odd thing: the return data of your API. You do not receive a list of numeric values but a record of mixed-type data. Luckily the Binance documentation says which value is the one you are looking for: it seems to be the third, i.e. index 2. (See: https://github.com/binance-exchange/binance-official-api-docs/blob/master/rest-api.md#klinecandlestick-data) The prices come back as strings, presumably to avoid floating-point precision issues.
Therefore your code would be:
maxima30days = [ float(x[2]) for x in data30days ]
totalMaximum = max(maxima30days)
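Plugging in the two sample klines from the question (hard-coded here so the sketch runs without an API key):

```python
# Two sample klines copied from the question; index 2 is the daily high.
data30days = [
    [1599264000000, '0.00000185', '0.00000192', '0.00000171', '0.00000177',
     '208963036.00000000', 1599350399999, '377.04825679', 14595,
     '82785887.00000000', '150.17277108', '0'],
    [1599350400000, '0.00000177', '0.00000185', '0.00000170', '0.00000182',
     '114643846.00000000', 1599436799999, '204.99814224', 9620,
     '55503278.00000000', '99.62131279', '0'],
]

# Convert the string prices to float before comparing
maxima30days = [float(x[2]) for x in data30days]
totalMaximum = max(maxima30days)  # 1.92e-06
```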
One little detail: the next time you ask a question, please say which module you're using; in other situations that piece of information can be essential. (Luckily here it is not.)
Please keep in mind that I was not able to test the code above, as you did not provide a working example to build upon, so please test it yourself. Feel free to add a comment to my answer if you encounter any errors and I'll try to resolve them.
Assuming that you have your data as above in a variable called values,
>>> values
[[1599264000000, '0.00000185', '0.00000192', '0.00000171', '0.00000177', '208963036.00000000', 1599350399999, '377.04825679', 14595, '82785887.00000000', '150.17277108', '0'], [1599350400000, '0.00000177', '0.00000185', '0.00000170', '0.00000182', '114643846.00000000', 1599436799999, '204.99814224', 9620, '55503278.00000000', '99.62131279', '0']]
if you want the maximum value of the third element in each sublist of values (converted to float because you don't want to do a string comparison), you can do for example:
>>> max(float(lst[2]) for lst in values)
1.92e-06
The data I have is in a .txt file and has breaks between readings; otherwise it's just one really long line. Each data point is a 16-second average of the z-component of the magnetic field of an incoming particle field. This is the code I currently have to load the file:
Bz = np.loadtxt(r'C:\Users\Schmidt\Desktop\Project\Data\ACE\MAG\ACE_MAG_Data.txt', dtype = str)
and that works fine, but when I ask to print Bz I get
[["b'-1.3695e+01'" "b'-1.3481e+01'"]
["b'-1.3804e+01'" "b'-1.3485e+01'"]
["b'-1.3704e+01'" "b'-1.3437e+01'"]
...,
["b'1.6371e+00'" "b'6.2744e-01'"]
["b'1.6171e+00'" "b'6.1338e-01'"]
["b'1.4028e+00'" "b'3.2874e-01'"]]
My problem is: how did that "b" get there in the first place, and how do I tell Python that each data point is an individual point instead of the pairs it has now?
This is the link to the file if you need to see. Just remember to remove the words and the file should act appropriately.
numpy is loading your data as bytes, and marking them as such, with the b character. I tried the following:
data = np.loadtxt("ACE_MAG_Data.txt", dtype=bytes).astype(float)
And it converts everything to floats instead:
>>> data
array([[-13.695 , -13.481 ],
[-13.804 , -13.485 ],
[-13.704 , -13.437 ],
...,
[ 1.6371 , 0.62744],
[ 1.6171 , 0.61338],
[ 1.4028 , 0.32874]])
You mention that these are individual points, not pairs. If you had saved them one per line in the file, numpy wouldn't assume they are pairs; as it stands, I would make the same assumption :)
However:
singles = [point for pair in data for point in pair]
Will convert it to a list of single points.
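If the data is already a numpy array, ravel does the same flattening without a Python-level loop (a sketch on a small stand-in array):

```python
import numpy as np

# Stand-in for the parsed (n, 2) array of readings
data = np.array([[-13.695, -13.481],
                 [-13.804, -13.485]])

# ravel flattens row by row, giving the individual points in file order
singles = data.ravel()
```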
I have a text file which have 541 lists and each list has 280 numbers such as below:
[301.82779832839964, 301.84247725804647, 301.85718673070272, ..., 324.4056396484375, 324.20379638671875, 324.00198364257812]
.
.
[310.6907599572782, 310.68334604280966, 310.67756809346469,..., 324.23541883368551, 324.18277040240207, 324.09177971086382]
To read this text file, I used numpy.genfromtxt, writing code to read the first list as a test:
pt1 = np.genfromtxt(filn1,dtype=np.float64,delimiter=",")
print pt1[0].shape
print list(pt1[0])
I expected that I could see the full list of the first list but the result list showed 'nan' in the first and the last place as below:
[nan, 301.84247725804647, 301.85718673070272, ..., 324.4056396484375, 324.20379638671875, nan]
I have tried other options of numpy.genfromtxt, but I couldn't find why it produced 'nan' in the first and last places of the list. This happened not only for the first list, but for all lists.
Any idea or help would be really appreciated.
Thank you,
Isaac
import numpy as np
from ast import literal_eval

# literal_eval safely parses each bracketed line into a Python list
pt1 = np.array([literal_eval(line) for line in open("in.txt")])
For:
[301.82779832839964, 301.84247725804647, 301.85718673070272, 324.4056396484375, 324.20379638671875, 324.00198364257812]
[310.6907599572782, 310.68334604280966, 310.67756809346469, 324.23541883368551, 324.18277040240207, 324.09177971086382]
You will get:
[[ 301.82779833 301.84247726 301.85718673 324.40563965 324.20379639
324.00198364]
[ 310.69075996 310.68334604 310.67756809 324.23541883 324.1827704
324.09177971]]
It looks like the problem is caused by the square brackets in your text file; the simplest solution would be to remove these characters from your file, either using find-replace in a text editor or, if your file is too large, with a command-line tool like sed.
genfromtxt is turning the [ and ] in your file into 'nan'. As a last resort you could do something like this:
from ast import literal_eval

data = []
with open('filn') as f:
    for line in f:
        if line.strip():
            data.append(literal_eval(line))  # safer than eval for file contents
data = np.asarray(data)
Alternatively you can replace the [ and ] for the whole file, and then use np.genfromtxt(filn1, dtype=np.float64, delimiter=",") like you were before, without getting any nan elements.
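That replacement can also be done in memory, without editing the file on disk; a sketch with a small stand-in string (raw plays the role of the file contents):

```python
import io
import numpy as np

# raw stands in for the contents of the text file
raw = "[301.8, 301.9, 324.4]\n[310.7, 310.6, 324.2]\n"

# Strip the brackets, then feed the cleaned text to genfromtxt as before
cleaned = io.StringIO(raw.replace("[", "").replace("]", ""))
pt1 = np.genfromtxt(cleaned, dtype=np.float64, delimiter=",")
```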
I have my geographical coordinates of rectangles represented as numpy ndarray like this:
(each row corresponds to a rectangle and each column contains its lower left and upper right longitudes and latitudes)
array([
[ 116.17265886, 39.92265886, 116.1761427 , 39.92536232],
[ 116.20749721, 39.90373467, 116.21098105, 39.90643813],
[ 116.21794872, 39.90373467, 116.22143255, 39.90643813]])
I want to call a coordinate-converting API whose input is a string like this:
'lon_0,lat_0;lon_1,lat_1;lon_2,lat_2;...;lon_n,lat_n'
So I wrote a stupid iteration to convert my ndarray to the required string like this:
coords = ''
for i in range(0, my_rectangle.shape[0]):
coords = coords + '{left_lon},{left_lat};{right_lon},{rigth_lat}'.format(left_lon = my_rectangle[i][0], left_lat = my_rectangle[i][1], \
right_lon = my_rectangle[i][2], rigth_lat = my_rectangle[i][3])
if i != my_rectangle.shape[0] - 1:
coords = coords + ';'
And the output is like this:
'116.172658863,39.9226588629;116.176142698,39.9253623188;116.207497213,39.9037346711;116.210981048,39.9064381271;116.217948718,39.9037346711;116.221432553,39.9064381271'
I'm wondering whether there is a smarter and faster approach to achieve this without explicit iteration (or some helpful documentation I could refer to)?
Let's try using functional style:
values = [[ 116.17265886, 39.92265886, 116.1761427 , 39.92536232],
[ 116.20749721, 39.90373467, 116.21098105, 39.90643813],
[ 116.21794872, 39.90373467, 116.22143255, 39.90643813]]
def prettyPrint(coords):
    return '{0},{1};{2},{3}'.format(coords[0], coords[1], coords[2], coords[3])

asString = list(map(prettyPrint, values))
print(";".join(asString)) # edited thanks to comments
map applies a function to each element of an iterable. You define the processing for one element, and map then replaces each element with its result.
Hope you find it smarter ;)
Edit :
You can also write it like this :
asString = [prettyPrint(value) for value in values]
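A numpy-flavoured alternative, since the data is an ndarray anyway: reshape the (n, 4) rectangles into (2n, 2) lon/lat pairs and join them (a sketch using two of the rectangles above):

```python
import numpy as np

my_rectangle = np.array([
    [116.17265886, 39.92265886, 116.1761427, 39.92536232],
    [116.20749721, 39.90373467, 116.21098105, 39.90643813],
])

# Each rectangle holds two lon/lat pairs; reshape exposes them one per row
pairs = my_rectangle.reshape(-1, 2)
coords = ";".join(",".join(str(v) for v in pair) for pair in pairs)
```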