I have a large (~100GB) data set xs of structured numpy arrays x. I want to bin the entries by a property p1 and find the mean and standard deviation of a property p2 in each bin. My method shown below works, but is quite slow. Is there any faster/more numpythonic way to do this? I can't fit the whole dataset in memory, but I do have lots of cores, so a nice way to parallelise it would also be nice.
import numpy as np

nbins = 30
bin_edges = np.linspace(0, 1, nbins)
N, p2_total, p2sq_total = np.zeros((3, nbins))

for x in xs_generator():
    p1s = x['p1']
    p2s = x['p2']
    which_bin = np.digitize(p1s, bins=bin_edges)
    for this_bin, bin_edge in enumerate(bin_edges):
        these_p1s = p1s[which_bin == this_bin]
        these_p2s = p2s[which_bin == this_bin]
        N[this_bin] += np.size(these_p1s)
        p2_total[this_bin] += np.sum(these_p2s)
        p2sq_total[this_bin] += np.sum(these_p2s**2)

means_p2 = p2_total/N
stds_p2 = np.sqrt(p2sq_total/N - means_p2**2)
You should use np.histogram:

N, binDump = np.histogram(p1s, bins=bin_edges)
p2_total, binDump = np.histogram(p1s, bins=bin_edges, weights=p2s)
p2sq_total, binDump = np.histogram(p1s, bins=bin_edges, weights=p2s**2)
means_p2 = p2_total/N
stds_p2 = np.sqrt(p2sq_total/N - means_p2**2)

This way you avoid the inner loop over bins entirely: np.histogram does the binning and, via weights, the per-bin sums for you; just accumulate the per-chunk results with += as before. :)
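Since the question also asks about using multiple cores: the per-chunk histograms simply add, so the work parallelises naturally over chunks. Below is a minimal sketch using multiprocessing, assuming xs_generator() yields picklable chunks and that histogramming a chunk is expensive enough to outweigh the cost of shipping it to a worker process; histogram_chunk is just an illustrative name.

import numpy as np
from multiprocessing import Pool

nbins = 30
bin_edges = np.linspace(0, 1, nbins)

def histogram_chunk(x):
    # Per-chunk counts and weighted sums; these just add across chunks.
    p1s, p2s = x['p1'], x['p2']
    n = np.histogram(p1s, bins=bin_edges)[0]
    s = np.histogram(p1s, bins=bin_edges, weights=p2s)[0]
    sq = np.histogram(p1s, bins=bin_edges, weights=p2s**2)[0]
    return n, s, sq

if __name__ == '__main__':
    with Pool() as pool:
        results = pool.imap_unordered(histogram_chunk, xs_generator())
        N, p2_total, p2sq_total = (sum(part) for part in zip(*results))
    means_p2 = p2_total / N
    stds_p2 = np.sqrt(p2sq_total / N - means_p2**2)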
I want the legend of the plot to show the value from a list, but what I get is the element index rather than the value itself. I don't know how to fix it. I'm referring to the plt.plot line. Thanks for the help.
import matplotlib.pyplot as plt
import numpy as np

x = np.random.random(1000)
y = np.random.random(1000)
n = len(x)

d_ij = []
for i in range(n):
    for j in range(i+1, n):
        a = np.sqrt((x[i]-x[j])**2 + (y[i]-y[j])**2)
        d_ij.append(a)

epsilon = np.linspace(0.01, 1, num=10)
sigma = np.linspace(0.01, 1, num=10)

def lj_pot(epsi, sig, d):
    result = []
    for i in range(len(d)):
        a = 4*epsi*((sig/d[i])**12 - (sig/d[i])**6)
        result.append(a)
    return result

for i in range(len(epsilon)):
    for j in range(len(sigma)):
        a = epsilon[i]
        b = sigma[j]
        plt.cla()
        plt.ylim([-1.5, 1.5])
        plt.xlim([0, 2])
        plt.plot(sorted(d_ij), lj_pot(epsilon[i], sigma[j], sorted(d_ij)), label='epsilon = %d, sigma =%d' % (a, b))
        plt.legend()
        plt.savefig("epsilon_%d_sigma_%d.png" % (i, j))
        plt.show()
Your code is a bit unpythonic, so I tried to clean it up to the best of my knowledge. numpy.random.random and numpy.random.uniform(0, 1) are basically the same; however, the latter also allows you to pass the shape of the return array that you would like to have, in this case an array with 1000 rows and two columns (1000, 2). I then use some magic to assign the two columns of the return array to x and y in the same line, respectively.
numpy.hypot does as the name suggests and calculates the hypotenuse of x and y. It can also do that for each entry of arrays of the same size, saving you the for loops, which you should try to avoid in Python since they are pretty slow.
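For instance, here is a small sketch (names chosen just for illustration) of the elementwise behaviour, plus a broadcasting variant that reproduces the all-pairs distances your original double loop computed:

import numpy as np

x = np.random.random(1000)
y = np.random.random(1000)

# Elementwise: distance of each point (x[i], y[i]) from the origin.
r = np.hypot(x, y)

# All pairwise distances without Python loops: broadcasting a (1000, 1)
# array against a (1, 1000) array yields the full (1000, 1000) matrix.
dist = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :])

# The d_ij list from the question is the strict upper triangle of dist.
d_ij = dist[np.triu_indices_from(dist, k=1)]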
You used plt for all your plotting, which is fine as long as you only have one figure, but I would recommend being as explicit as possible, in keeping with one of Python's key notions:
explicit is better than implicit.
I recommend you read through this guide, in particular the section called 'Stateful Versus Stateless Approaches'. I changed your commands accordingly.
It is also very unpythonic to loop over items of a list using the index of the item in the list like you did (for i in range(len(list)): item = list[i]). You can just reference the item directly (for item in list:).
Lastly, I changed your formatted strings to the more convenient f-strings; note that the %d specifier formats numbers as integers, which is why your labels showed 0 instead of the actual epsilon and sigma values. Have a read here.
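A quick demonstration of the difference (the values here are arbitrary):

epsilon, sigma = 0.34, 0.67

print('epsilon = %d, sigma = %d' % (epsilon, sigma))      # epsilon = 0, sigma = 0
print('epsilon = %.2f, sigma = %.2f' % (epsilon, sigma))  # epsilon = 0.34, sigma = 0.67
print(f'epsilon = {epsilon}, sigma = {sigma}')            # epsilon = 0.34, sigma = 0.67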
import matplotlib.pyplot as plt
import numpy as np

def pot(epsi, sig, d):
    result = 4*epsi*((sig/d)**12 - (sig/d)**6)
    return result

# I am not sure why you would create the independent variable this way,
# maybe you are simulating something. In that case, the code below is
# simpler than your version and should achieve the same.
# x, y = zip(*np.random.uniform(0, 1, (1000, 2)))
# d = np.array(sorted(np.hypot(x, y)))

# If you only want to plot your pot function then creating the value range
# like this is just fine.
d = np.linspace(0.001, 1, 1000)

epsilons = sigmas = np.linspace(0.01, 1, num=10)

fig, ax = plt.subplots()
ax.set_xlim([0, 2])
ax.set_ylim([-1.5, 1.5])

line = None
for epsilon in epsilons:
    for sigma in sigmas:
        if line is None:
            line = ax.plot(
                d, pot(epsilon, sigma, d),
                label=f'epsilon = {epsilon}, sigma = {sigma}'
            )[0]
            fig.legend()
        else:
            line.set_data(d, pot(epsilon, sigma, d))
        # plt.savefig(f"epsilon_{epsilon}_sigma_{sigma}.png")

fig.show()
I want to read a grayscale image, say something with shape (248, 480, 3), then use each element of it as the lam value for drawing a Poisson random value, producing a new data set with the same shape. I want to do this nscan times, add the results together, and plot the combined data to get something similar to the original image. This code works, but it is extremely slow; is there any way to make it faster?
import numpy as np
import matplotlib.pyplot as plt

my_image = plt.imread('myimage.png')

def genP(data):
    new_data = np.zeros(data.shape)
    for i in range(data.shape[0]):
        for j in range(data.shape[1]):
            for k in range(data.shape[2]):
                new_data[i, j, k] = np.random.poisson(lam=data[i, j, k])
    return new_data

def get_total(data, nscan=1):
    total = genP(data)
    for i in range(nscan):
        total += genP(data)
    total = total/nscan
    plt.imshow(total)
    plt.show()

get_total(my_image, 100)
numpy.random.poisson can entirely replace your genP() function, since lam can be an array rather than a scalar. This is basically guaranteed to be much faster. From the docs:
If size is None (default), a single value is returned if lam is a scalar. Otherwise, np.array(lam).size samples are drawn.
def get_total(data, nscan=1):
    # lam can be a whole array: one Poisson sample is drawn per element.
    total = np.random.poisson(lam=data)
    for i in range(nscan - 1):  # nscan draws in total
        total += np.random.poisson(lam=data)
    total = total/nscan
    plt.imshow(total)
    plt.show()
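If memory allows, you can also skip the Python loop entirely and draw all nscan samples in one call, then average over the first axis. This needs roughly nscan extra copies of the image in memory at once, so treat it as a sketch for when that is acceptable (the function name is just for illustration):

import numpy as np
import matplotlib.pyplot as plt

def get_total_vectorised(data, nscan=1):
    # Shape (nscan, H, W, C): nscan independent Poisson samples per pixel.
    samples = np.random.poisson(lam=data, size=(nscan,) + data.shape)
    total = samples.mean(axis=0)
    plt.imshow(total)
    plt.show()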
I have a project where I'm sampling analog data and attempting to analyze it with matplotlib. Currently, my analog data source is a potentiometer hooked up to a microcontroller, but that's not really relevant to the issue. Here's my code:
arrayFront = RunningMean(array(dataFront), 15)
arrayRear = RunningMean(array(dataRear), 15)
x = linspace(0, len(arrayFront), len(arrayFront)) # Generate x axis
y = linspace(0, len(arrayRear), len(arrayRear)) # Generate x axis for the rear data
min_vals_front = scipy.signal.argrelmin(arrayFront, order=2)[0] # Min
min_vals_rear = scipy.signal.argrelmin(arrayRear, order=2)[0] # Min
max_vals_front = scipy.signal.argrelmax(arrayFront, order=2)[0] # Max
max_vals_rear = scipy.signal.argrelmax(arrayRear, order=2)[0] # Max
maxvalfront = max(arrayFront[max_vals_front])
maxvalrear = max(arrayRear[max_vals_rear])
minvalfront = min(arrayFront[min_vals_front])
minvalrear = min(arrayRear[min_vals_rear])
plot(x, arrayFront, label="Front Pressures")
plot(y, arrayRear, label="Rear Pressures")
plot(x[min_vals_front], arrayFront[min_vals_front], "x")
plot(x[max_vals_front], arrayFront[max_vals_front], "o")
plot(y[min_vals_rear], arrayRear[min_vals_rear], "x")
plot(y[max_vals_rear], arrayRear[max_vals_rear], "o")
xlim(-25, len(arrayFront) + 25)
ylim(-1000, 7000)
legend(loc='upper left')
show()
dataFront and dataRear are python lists that hold the sampled data from 2 potentiometers. RunningMean is a function that calls:
convolve(x, ones((N,)) / N, mode='valid')
The problem is that the argrelmax (and argrelmin) functions don't always find all the maxes and mins. Sometimes they don't find ANY maxes or mins, and that causes me problems in this block of code
maxvalfront = max(arrayFront[max_vals_front])
maxvalrear = max(arrayRear[max_vals_rear])
minvalfront = min(arrayFront[min_vals_front])
minvalrear = min(arrayRear[min_vals_rear])
because the [min_vals_(blank)] variables are empty. Does anyone have any idea what is happening here, and what I can do to fix the problem? Thanks in advance.
Here's one of the graphs of the data where not all the maxes and mins are found:
signal.argrelmin is a thin wrapper around signal.argrelextrema with comparator=np.less. np.less(a, b) returns the truth value of a < b element-wise. Notice that np.less requires a to be strictly less than b for it to be True.
Your data has the same minimum value at a lot of neighboring locations. At these flat local minima, the value is not strictly less than its neighbors; it is only less than or equal to them.
Therefore, to find these extrema use signal.argrelmin with comparator=np.less_equal. For example, using a snippet from your data:
import numpy as np
from scipy import signal

arrayRear = np.array([-624.59309896, -624.59309896, -624.59309896,
                      -625., -625., -625.])

print(signal.argrelmin(arrayRear, order=2)[0])
# []

print(signal.argrelextrema(arrayRear, np.less_equal)[0])
# [0 1 3 4 5]

print(signal.argrelextrema(arrayRear, np.less_equal, order=2)[0])
# [0 3 4 5]
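Back in the plotting code from the question, it can also be worth guarding against the case where no extrema are found at all, so the max()/min() calls don't fail on empty arrays. A small sketch of that idea for the front array (the rear array would be handled the same way):

if max_vals_front.size and min_vals_front.size:
    maxvalfront = max(arrayFront[max_vals_front])
    minvalfront = min(arrayFront[min_vals_front])
else:
    # No local extrema found; fall back to the global extremes of the signal.
    maxvalfront = arrayFront.max()
    minvalfront = arrayFront.min()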
So I need to calculate the joint probability distribution for N variables. I have code for two variables, but I am having trouble generalizing it to higher dimensions. I imagine there is some sort of pythonic vectorization that could be helpful, but right now my code is very C-like (and yes, I know that is not the right way to write Python). My 2D code is below:
import numpy
import math

feature1 = numpy.array([1.1,2.2,3.0,1.2,5.4,3.4,2.2,6.8,4.5,5.6,1.9,2.8,3.7,4.4,7.3,8.3,8.1,7.0,8.0,6.8,6.2,4.9,5.7,6.3,3.7,2.4,4.5,8.5,9.5,9.9]);
feature2 = numpy.array([11.1,12.8,13.0,11.6,15.2,13.8,11.1,17.8,12.5,15.2,11.6,20.8,14.7,14.4,15.3,18.3,11.4,17.0,16.0,16.8,12.2,14.9,15.7,16.3,13.7,12.4,14.2,18.5,19.8,19.0]);

#===Concatenate All Features===#
numFrames = len(feature1);
allFeatures = numpy.zeros((2,numFrames));
allFeatures[0,:] = feature1;
allFeatures[1,:] = feature2;

#===Create the Array to hold all the Bins===#
numBins = int(0.25*numFrames);
allBins = numpy.zeros((allFeatures.shape[0],numBins+1));

#===Find the maximum and minimum of each feature===#
allRanges = numpy.zeros((allFeatures.shape[0],2));
for f in range(allFeatures.shape[0]):
    allRanges[f,0] = numpy.amin(allFeatures[f,:]);
    allRanges[f,1] = numpy.amax(allFeatures[f,:]);

#===Create the Array to hold all the individual feature probabilities===#
allIndividualProbs = numpy.zeros((allFeatures.shape[0],numBins));

#===Grab all the Individual Probs and the Bins===#
for f in range(allFeatures.shape[0]):
    freqhist, binedges = numpy.histogram(allFeatures[f,:],bins=numBins,range=[allRanges[f,0],allRanges[f,1]],density=False);
    allBins[f,:] = binedges;
    allIndividualProbs[f,:] = freqhist;

#===Create the joint probability array===#
jointProbs = numpy.zeros((numBins,numBins));

#===Compute the joint probability distribution===#
numElements = 0;
for b1 in range(numBins):
    for b2 in range(numBins):
        for f1 in range(numFrames):
            for f2 in range(numFrames):
                if ( ( (feature1[f1] >= allBins[0,b1]) and (feature1[f1] <= allBins[0,b1+1]) ) and ((feature2[f2] >= allBins[1,b2]) and (feature2[f2] <= allBins[1,b2+1])) ):
                    jointProbs[b1,b2] += 1;
                    numElements += 1;
jointProbs /= numElements;

#===But what if I add the following===#
feature3 = numpy.array([21.1,21.8,23.5,27.6,25.2,23.8,22.1,22.8,26.5,25.2,28.6,20.8,24.7,24.4,29.3,28.3,27.4,26.0,26.2,26.1,25.9,24.0,22.7,22.3,23.7,26.4,24.2,28.5,29.8,29.0]);
How can I generalize the large loop? For N variables (features) this loop would be enormous. Is there a Pythonic way to do this easily?
Check out the function numpy.histogramdd. This function can compute histograms in arbitrary numbers of dimensions. If you set the parameter density=True (called normed in older NumPy versions), it returns the probability density over the bins, normalised so that its integral over the range is 1. If you'd prefer something more like a probability mass function (where the values themselves sum to 1), just normalize it yourself. All together, you'll have something like:
import numpy as np
numBins = 10 # number of bins in each dimension
data = np.random.randn(100000, 3) # generate 100000 3-d random data points
jointProbs, edges = np.histogramdd(data, bins=numBins)
jointProbs /= jointProbs.sum()
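Applied to the feature arrays from the question, that might look something like this (a sketch reusing feature1/feature2/feature3 and the 0.25 * numFrames bin count from your code):

import numpy as np

# One row per frame, one column per feature; adding a feature adds a column.
allFeatures = np.column_stack((feature1, feature2, feature3))

numFrames = allFeatures.shape[0]
numBins = int(0.25 * numFrames)

# N-dimensional joint histogram, normalised so that it sums to 1.
jointProbs, allBinEdges = np.histogramdd(allFeatures, bins=numBins)
jointProbs /= jointProbs.sum()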
I've read here that matplotlib is good at handling large data sets. I'm writing a data processing application and have embedded matplotlib plots into wx and have found matplotlib to be TERRIBLE at handling large amounts of data, both in terms of speed and in terms of memory. Does anyone know a way to speed up (reduce memory footprint of) matplotlib other than downsampling your inputs?
To illustrate how bad matplotlib is with memory consider this code:
import pylab
import numpy
a = numpy.arange(int(1e7)) # only 10,000,000 32-bit integers (~40 Mb in memory)
# watch your system memory now...
pylab.plot(a) # this uses over 230 ADDITIONAL Mb of memory
Downsampling is a good solution here: plotting 10M points consumes a lot of memory and time in matplotlib. If you know how much memory is acceptable, then you can downsample based on that amount. For example, let's say 1M points takes 23 additional MB of memory and you find that acceptable in terms of space and time; then downsample so that you always stay below 1M points:

import scipy.signal

if len(a) > 1_000_000:
    a = scipy.signal.decimate(a, int(len(a) / 1_000_000) + 1)
pylab.plot(a)

Or something like the above snippet (it may downsample more aggressively than you'd like).
I'm often interested in the extreme values too so, before plotting large chunks of data, I proceed in this way:
import numpy as np
s = np.random.normal(size=(int(1e7),))
decimation_factor = 10
s = np.max(s.reshape(-1,decimation_factor),axis=1)
# To check the final size
s.shape
Of course, np.max is just one example of an extreme-value function.
P.S.
With numpy "strides tricks" it should be possible to avoid copying data around during reshape.
I was interested in preserving one side of a log-sampled plot, so I came up with this (downsample below being my first attempt):
import numpy as np
import matplotlib.pyplot as plt

def downsample(x, y, target_length=1000, preserve_ends=0):
    assert len(x.shape) == 1
    assert len(y.shape) == 1
    data = np.vstack((x, y))
    if preserve_ends > 0:
        l, data, r = np.split(data, (preserve_ends, -preserve_ends), axis=1)
    interval = int(data.shape[1] / target_length) + 1
    data = data[:, ::interval]
    if preserve_ends > 0:
        data = np.concatenate([l, data, r], axis=1)
    return data[0, :], data[1, :]

def geom_ind(stop, num=50):
    geo_num = num
    ind = np.geomspace(1, stop, dtype=int, num=geo_num)
    while len(set(ind)) < num - 1:
        geo_num += 1
        ind = np.geomspace(1, stop, dtype=int, num=geo_num)
    return np.sort(list(set(ind) | {0}))

def log_downsample(x, y, target_length=1000, flip=False):
    assert len(x.shape) == 1
    assert len(y.shape) == 1
    data = np.vstack((x, y))
    if flip:
        data = np.fliplr(data)
    data = data[:, geom_ind(data.shape[1], num=target_length)]
    if flip:
        data = np.fliplr(data)
    return data[0, :], data[1, :]
which allowed me to better preserve one side of the plot:
newx, newy = downsample(x, y, target_length=1000, preserve_ends=50)
newlogx, newlogy = log_downsample(x, y, target_length=1000)
f = plt.figure()
plt.gca().set_yscale("log")
plt.step(x, y, label="original")
plt.step(newx, newy, label="downsample")
plt.step(newlogx, newlogy, label="log_downsample")
plt.legend()