In numpy the function for calculating the standard deviation expects a list of values like [1, 2, 1, 1] and calculates the standard deviation from those. In my case I have a nested list of values and counts like [[1, 2], [3, 1]], where the first inner list contains the values and the second contains the count of how often each corresponding value appears.
I am looking for a clean way of calculating the standard deviation for a given list like above, clean meaning
an already existing function in numpy, scipy, pandas etc.
a more pythonic approach to the problem
a more concise and nicely readable solution
I already have a working solution that converts the nested value/count list into a flattened list of values and calculates the standard deviation with the function above, but I find it not that pleasing and would rather have another option.
A minimal working example of my workaround is
import numpy as np
# The usual way
values = [1,2,1,1]
deviation = np.std(values)
print(deviation)
# My workaround for the problem
value_counts = [[1, 2], [3, 1]]
values, counts = value_counts
flattened = []
for value, count in zip(values, counts):
    # append the current value count times
    flattened = flattened + [value] * count
deviation = np.std(flattened)
print(deviation)
The output is
0.4330127018922193
0.4330127018922193
Thanks for any ideas or suggestions :)
You are simply looking for numpy.repeat.
numpy.std(numpy.repeat(value_counts[0], value_counts[1]))
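If the counts are large, materializing the repeated array can be wasteful. As an alternative sketch, the weighted mean and variance can be computed directly with np.average and its weights parameter, without expanding the data:

```python
import numpy as np

# values and their counts, as in the question
values = np.array([1, 2])
counts = np.array([3, 1])

# weighted mean, then weighted variance, without expanding the data
mean = np.average(values, weights=counts)
variance = np.average((values - mean) ** 2, weights=counts)
deviation = np.sqrt(variance)
print(deviation)  # same result as np.std on the flattened list
```

This computes the population standard deviation, matching np.std's default behavior on the flattened list.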
I am using pandas and uproot to read data from a .root file, and I get a table like the following one:
The aforementioned table is made with the following code:
fname = 'ZZ4lAnalysis_VBFH.root'
key = 'ZZTree/candTree'
ttree = uproot.open(fname)[key]
branches = ['Z1Flav', 'Z2Flav', 'nCleanedJetsPt30', 'LepPt', 'LepLepId']
df = ttree.pandas.df(branches, flatten=False)
I need to find the maximum value in LepPt, and, once I have found the maximum, I also need to retrieve the LepLepId of that maximum value.
I have no problem in finding the maximum values:
Pt_l1 = [max(i) for i in df.LepPt]
In this way I get an array with all the maximum values. However, I have to separate such values according to the LepLepId. So I need an array with the maximum LepPt and |LepLepId|=11 and one with the maximum LepPt and |LepLepId|=13.
If someone could give me any hint, advice and/or suggestion, I would be very grateful.
I made some mock data since you didn't provide yours in any easy format. I think this is what you are looking for.
import pandas as pd

df = pd.DataFrame.from_records(
    [[[1, 2, 3], [4, 5, 6]],
     [[4, 6, 5], [7, 8, 9]]],
    columns=['LepPt', 'LepLepId']
)

df['max_LepPt'] = [max(i) for i in df.LepPt]

def f(row):
    # get the index position of the maximum within the list
    pos = row['LepPt'].index(row['max_LepPt'])
    return row['LepLepId'][pos]

df['same_index_LepLepId'] = df.apply(f, axis=1)

returns:

       LepPt   LepLepId  max_LepPt  same_index_LepLepId
0  [1, 2, 3]  [4, 5, 6]          3                    6
1  [4, 6, 5]  [7, 8, 9]          6                    8
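The question also asks to separate the maxima by |LepLepId| (11 for electrons, 13 for muons). A minimal sketch on mock data (the values and the helper name here are made up, not taken from the real .root file), keeping only leptons of the requested flavour before taking the maximum:

```python
import pandas as pd

# mock data: per-event lists of lepton pt values and their PDG ids
df = pd.DataFrame({
    'LepPt': [[10.0, 50.0, 30.0], [70.0, 20.0, 40.0]],
    'LepLepId': [[11, -13, 11], [13, -11, 13]],
})

def max_pt_for_flavour(row, flavour):
    # keep only the pt values whose |id| matches the requested flavour
    pts = [pt for pt, pid in zip(row['LepPt'], row['LepLepId'])
           if abs(pid) == flavour]
    return max(pts) if pts else None

df['max_pt_e'] = df.apply(max_pt_for_flavour, axis=1, flavour=11)
df['max_pt_mu'] = df.apply(max_pt_for_flavour, axis=1, flavour=13)
```

The `None` fallback covers events with no lepton of that flavour, which would otherwise make `max` raise on an empty list.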
You could use the awkward.JaggedArray interface for this (one of the dependencies of uproot), which allows you to have irregularly sized arrays.
For this you would need to slightly change the way you load the data, but it allows you to use the same methods you would use with a normal numpy array, namely argmax:
fname = 'ZZ4lAnalysis_VBFH.root'
key = 'ZZTree/candTree'
ttree = uproot.open(fname)[key]
# branches = ['Z1Flav', 'Z2Flav', 'nCleanedJetsPt30', 'LepPt', 'LepLepId']
branches = ['LepPt', 'LepLepId'] # to save memory, only load what you need
# df = ttree.pandas.df(branches, flatten=False)
a = ttree.arrays(branches) # use awkward array interface
max_pt_idx = a[b'LepPt'].argmax()
max_pt_lepton_id = a[b'LepLepId'][max_pt_idx].flatten()
This is then just a normal numpy array, which you can assign to a column of a pandas dataframe if you want to. It should have the right dimensionality and order. It should also be faster than using the built-in Python functions.
Note that the keys are bytestrings instead of normal strings, and that you will have to take some extra steps if there are events with no leptons (in which case the flatten will ignore those empty events, destroying the alignment).
Alternatively, you can also convert the columns afterwards:
import awkward
df = ttree.pandas.df(branches, flatten=False)
max_pt_idx = awkward.fromiter(df["LepPt"]).argmax()
lepton_id = awkward.fromiter(df["LepLepId"])
df["max_pt_lepton_id"] = lepton_id[max_pt_idx].flatten()
The former will be faster if you don't need the columns again afterwards, otherwise the latter might be better.
I have some large unique numbers that are some sort of identity of devices
clusteringOutput[:,1]
Out[140]:
array([1.54744609e+12, 1.54744946e+12, 1.54744133e+12, ...,
1.54744569e+12, 1.54744570e+12, 1.54744571e+12])
Even though the numbers are large, there are only a handful of distinct values that just repeat over the entries.
I would like to remap those into a smaller range of integers. So if there are only 100 distinct values, I would like to map them to the range 1-100, with a mapping table that allows me to find and see those mappings.
The remapping functions I found on the internet typically rescale, and I do not want to rescale. I want concrete integer numbers that map the long ids I have to numbers that are simpler on the eyes.
Any ideas on how I can implement that? I can use pandas data frames if it helps.
Thanks a lot
Alex
Use numpy.unique with return_inverse=True:
import numpy as np
arr = np.array([1.54744609e+12,
                1.54744946e+12,
                1.54744133e+12,
                1.54744133e+12,
                1.54744569e+12,
                1.54744570e+12,
                1.54744571e+12])
mapper, ind = np.unique(arr, return_inverse=True)
Output of ind:
array([4, 5, 0, 0, 1, 2, 3])
Remapping using mapper:
mapper[ind]
# array([1.54744609e+12, 1.54744946e+12, 1.54744133e+12, 1.54744133e+12,
# 1.54744569e+12, 1.54744570e+12, 1.54744571e+12])
Validation:
all(arr == mapper[ind])
# True
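To get the 1-based small integers and an explicit lookup table, as asked for, a small sketch on top of the same np.unique call (the variable names here are just illustrative):

```python
import numpy as np

arr = np.array([1.54744609e+12, 1.54744946e+12, 1.54744133e+12,
                1.54744133e+12, 1.54744569e+12])

uniques, ind = np.unique(arr, return_inverse=True)
small_ids = ind + 1  # shift from 0-based indices to the 1..n range

# explicit lookup table: small id -> original large id
mapping = {small: big for small, big in enumerate(uniques, start=1)}
print(small_ids)
print(mapping)
```

Since np.unique sorts the values, small id 1 always corresponds to the smallest original id.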
Can someone please help me out? I am trying to get the minimum value of each row and of each column of this matrix
matrix =[[12,34,28,16],
[13,32,36,12],
[15,32,32,14],
[11,33,36,10]]
So for example: I would want my program to print out that 12 is the minimum value of row 1 and so on.
Let's repeat the task statement: "get the minimum value of each row and of each column of this matrix".
Okay, so, if the matrix has n rows, you should get n minimum values, one for each row. Sounds interesting, doesn't it? So, the code'll look like this:
result1 = [<something> for row in matrix]
Well, what do you need to do with each row? Right, find the minimum value, which is super easy:
result1 = [min(row) for row in matrix]
As a result, you'll get a list of n values, just as expected.
Wait, by now we've only found the minimums for each row, but not for each column, so let's do this as well!
Given that you're using Python 3.x, you can do some pretty amazing stuff. For example, you can loop over columns easily:
result2 = [min(column) for column in zip(*matrix)] # notice the asterisk!
The asterisk in zip(*matrix) makes each row of matrix a separate argument of zip's, like this:
zip(matrix[0], matrix[1], matrix[2], matrix[3])
This doesn't look very readable and is dependent on the number of rows in matrix (basically, you'll have to hard-code them), and the asterisk lets you write much cleaner code.
zip returns tuples, and the ith tuple contains the ith values of all the rows, so these tuples are actually the columns of the given matrix.
Now, you may find this code a bit ugly, you may want to write the same thing in a more concise way. Sure enough, you can use some functional programming magic:
result1 = list(map(min, matrix))
result2 = list(map(min, zip(*matrix)))
These two approaches are absolutely equivalent.
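Putting the two comprehensions together with the matrix from the question:

```python
matrix = [[12, 34, 28, 16],
          [13, 32, 36, 12],
          [15, 32, 32, 14],
          [11, 33, 36, 10]]

row_mins = [min(row) for row in matrix]           # one minimum per row
col_mins = [min(column) for column in zip(*matrix)]  # one minimum per column
print(row_mins)  # [12, 12, 14, 10]
print(col_mins)  # [11, 32, 28, 10]
```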
Use numpy.
>>> import numpy as np
>>> matrix =[[12,34,28,16],
... [13,32,36,12],
... [15,32,32,14],
... [11,33,36,10]]
>>> np.min(matrix, axis=1) # computes minimum in each row
array([12, 12, 14, 10])
>>> np.min(matrix, axis=0) # computes minimum in each column
array([11, 32, 28, 10])
I'm working with a very large data set (about 75 million entries) and I'm trying to shorten the length of time that running my code takes by a significant margin (with a loop right now it will take a couple days) and keep memory usage extremely low.
I have two numpy arrays (clients and units) of the same length. My goal is to get a list of every index that a value occurs in my first list (clients) and then find a sum of the entries in my second list at each of those indices.
This is what I've tried (np is the previously imported numpy library)
# create a list of each value that appears in clients
unq = np.unique(clients)
arr = np.zeros(len(unq))
tmp = np.arange(len(clients))
# for each unique value i in clients
for i in range(len(unq)):
    # create a list inds of all the indices where unq[i] occurs in clients
    inds = tmp[clients == unq[i]]
    # add the sum of all the elements in units at the indices inds to a list
    arr[i] = sum(units[inds])
Does anyone know a method that will allow me to find these sums without looping through each element in unq?
With Pandas, this can easily be done using the groupby() function:
import pandas as pd
# some fake data
df = pd.DataFrame({'clients': ['a', 'b', 'a', 'a'], 'units': [1, 1, 1, 1]})
print(df.groupby(['clients'], sort=False).sum())
which gives you the desired output:
units
clients
a 3
b 1
I use the sort=False option since that might lead to a speed-up (by default the entries will be sorted, which can take some time for huge datasets).
This is a typical group-by type operation, which can be performed elegantly and efficiently using the numpy-indexed package (disclaimer: I am its author):
import numpy_indexed as npi
unique_clients, units_per_client = npi.group_by(clients).sum(units)
Note that unlike the pandas approach, there is no need to create a temporary datastructure just to perform this kind of elementary operation.
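For completeness, the same group-by sum can also be sketched with plain numpy, combining np.unique's inverse indices with np.bincount's weights parameter (using the same fake data as the pandas answer):

```python
import numpy as np

clients = np.array(['a', 'b', 'a', 'a'])
units = np.array([1, 1, 1, 1])

# inverse indices say which unique client each entry belongs to
unique_clients, inverse = np.unique(clients, return_inverse=True)
# bincount with weights sums the units per unique client in one pass
sums = np.bincount(inverse, weights=units)
print(unique_clients)  # ['a' 'b']
print(sums)            # [3. 1.]
```

This avoids the Python-level loop over unique values entirely, which is where the original code loses its time.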
def maxvalues():
    for n in range(1, 15):
        dummy = []
        for k in range(len(MotionsAndMoorings)):
            dummy.append(MotionsAndMoorings[k][n])
        max(dummy)
        L = [x + [max(dummy)]]  ## to be corrected (adding columns with value max(dummy))
        ## suggest code to add new row to L and for next function call, it should save values here.
I have an array of size (k x n) and I need to pick the max values of the first column in that array. Please suggest if there is a simpler way other than what I tried. My main aim is to append it to L in columns rather than rows. If I just append, it adds values at the end. I would like this to be done in columns for row 0 in L, because I'll call this function again and add a new row to L and do the same. Please suggest.
General suggestions for your code
First of all it's not very handy to access globals in a function. It works but it's not considered good style. So instead of using:
def maxvalues():
    do_something_with(MotionsAndMoorings)
you should do it with an argument:
def maxvalues(array):
    do_something_with(array)
MotionsAndMoorings = something
maxvalues(MotionsAndMoorings) # pass it to the function.
The next strange thing is that you seem to exclude the first row of your array:
for n in range(1,15):
I think that's unintended. The first element of a list has the index 0 and not 1. So I guess you wanted to write:
for n in range(0,15):
or even better for arbitrary lengths:
for n in range(len(array[0])): # I chose the first row length here not the number of columns
Alternatives to your iterations
But this would not be very intuitive, because the max function already supports a very nice keyword argument (key), so you don't need to iterate over the whole array:
import operator
column = 2
max(array, key=operator.itemgetter(column))[column]
this will return the row where the i-th element is maximal (you just choose your wanted column as that element). But max will return the whole row, so you need to extract just the i-th element.
So to get a list of all your maximums for each column you could do:
[max(array, key=operator.itemgetter(column))[column] for column in range(len(array[0]))]
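For example, with a small made-up array:

```python
import operator

array = [[1, 9, 2],
         [4, 5, 6],
         [7, 3, 8]]

# for each column, find the row whose value in that column is maximal,
# then extract that column's value from the winning row
column_maxes = [max(array, key=operator.itemgetter(column))[column]
                for column in range(len(array[0]))]
print(column_maxes)  # [7, 9, 8]
```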
For your L I'm not sure what this is but for that you should probably also pass it as argument to the function:
def maxvalues(array, L): # another argument here
but since I don't know what x and L are supposed to be I'll not go further into that. But it looks like you want to make the columns of MotionsAndMoorings to rows and the rows to columns. If so you can just do it with:
dummy = [[MotionsAndMoorings[j][i] for j in range(len(MotionsAndMoorings))] for i in range(len(MotionsAndMoorings[0]))]
that's a list comprehension that converts a list like:
[[1, 2, 3], [4, 5, 6], [0, 2, 10], [0, 2, 10]]
to an "inverted" column/row list:
[[1, 4, 0, 0], [2, 5, 2, 2], [3, 6, 10, 10]]
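The same inversion can also be written with the zip(*...) trick mentioned earlier in this thread, which is a shorter equivalent of that comprehension:

```python
data = [[1, 2, 3], [4, 5, 6], [0, 2, 10], [0, 2, 10]]

# zip(*data) yields the columns as tuples; convert each back to a list
transposed = [list(column) for column in zip(*data)]
print(transposed)  # [[1, 4, 0, 0], [2, 5, 2, 2], [3, 6, 10, 10]]
```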
Alternative packages
But like roadrunner66 already said sometimes it's easiest to use a library like numpy or pandas that already has very advanced and fast functions that do exactly what you want and are very easy to use.
For example you convert a python list to a numpy array simple by:
import numpy as np
Motions_numpy = np.array(MotionsAndMoorings)
you get the maximum of the columns by using:
maximums_columns = np.max(Motions_numpy, axis=0)
you don't even need to convert it to a np.array to use np.max or transpose it (make rows to columns and the columns to rows):
transposed = np.transpose(MotionsAndMoorings)
I hope this answer is not too unstructured. Some parts are suggestions for your function and some are alternatives. You should pick the parts that you need, and if you have any trouble with it, just leave a comment or ask another question. :-)
An example with a random input array, showing that you can take the max in either axis easily with one command.
import numpy as np

aa = np.random.random([4, 3])
print(aa)
print()
print(np.max(aa, axis=0))
print()
print(np.max(aa, axis=1))
Output:
[[ 0.51972266 0.35930957 0.60381998]
[ 0.34577217 0.27908173 0.52146593]
[ 0.12101346 0.52268843 0.41704152]
[ 0.24181773 0.40747905 0.14980534]]
[ 0.51972266 0.52268843 0.60381998]
[ 0.60381998 0.52146593 0.52268843 0.40747905]