I have some event times in a list and I would like to plot an exponentially weighted moving average of them. I can do this using the following code.
import numpy as np
import matplotlib.pyplot as plt

print("Code running")
a = 0.01
l = [3.0, 7.0, 10.0, 20.0, 200.0]
y = np.zeros(1000)
for item in l:
    y[int(item)] = 1
s = np.zeros(1000)
x = np.linspace(0, 1000, 1000)
for i in range(1000):
    s[i] = a*y[i-1] + (1-a)*s[i-1]
plt.plot(x, s)
plt.show()
This is clearly a horrible way to use Python, however. What's the right way to do this? Is it possible to do it without creating all these extra sparse arrays?
The output should look like this.
Pandas comes to mind for this task:
import numpy as np
import pandas as pd

l = [3.0, 7.0, 10.0, 20.0, 200.0]
s = pd.Series(np.ones_like(l), index=l)
y = s.reindex(range(1000), fill_value=0)
# pd.ewma(y, span=199) in older pandas; the current spelling is:
y.ewm(span=199).mean().plot()
The span 199 is related to your parameter a = 0.01 by a = 2/(n+1), i.e. n = 2/a - 1. Result:
AFAIK there's not a very good way to do this with numpy or the scipy.sparse module -- the sparse matrices in scipy.sparse are designed to be 2D matrices, and to create one in the first place you'd basically need to use the code you've already written in your first loop (i.e., to set all of the nonzero locations in a sparse matrix), with the additional complexity of always having to specify two index values.
As if that's not bad enough, np.convolve doesn't work with sparse arrays, so you'd still need to write out the computation in your second loop to compute the moving average.
My recommendation, which probably isn't much help if you're looking for a fancy numpy version, is to fall back on Python's excellent support as a general-purpose language:
import numpy as np
import matplotlib.pyplot as plt

a = 0.01
l = set([3, 7, 10, 20, 200])
s = np.zeros(1000)
for i in range(len(s)):
    s[i] = a * int(i-1 in l) + (1-a) * s[i-1]
plt.plot(s)
plt.show()
Here, I've stored the event index values in l, just as you did, but I used a set to make lookups O(1) -- though if len(l) isn't very large, you might even be better off with a plain list or tuple; you'd need to measure it to be sure. Then you can avoid creating the y array entirely and just rely on Iverson's convention to convert the Boolean value of i-1 in l into an int. You might not even need the explicit cast, but I find it helpful to be explicit.
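As an aside, the recursion s[i] = a*y[i-1] + (1-a)*s[i-1] is just a first-order IIR filter, so if scipy is acceptable it can be evaluated with no Python loop at all. A minimal sketch, assuming the same a and the dense event array y from the question:
import numpy as np
from scipy.signal import lfilter

a = 0.01
y = np.zeros(1000)
y[[3, 7, 10, 20, 200]] = 1  # event indicator, as in the question

# shift y by one sample to reproduce the y[i-1] term, then filter:
# s[i] = a*z[i] + (1-a)*s[i-1], i.e. b=[a] and a=[1, -(1-a)]
z = np.concatenate(([0.0], y[:-1]))
s = lfilter([a], [1.0, -(1.0 - a)], z)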
I think you're looking for something like this:
import numpy as np
import matplotlib.pyplot as plt
from scikits.timeseries.lib.moving_funcs import mov_average_expw

l = [3.0, 7.0, 10.0, 20.0, 200.0]
y = np.zeros(1000)
y[np.asarray(l, dtype=int)] = 1
emav = mov_average_expw(y, 199)
plt.plot(emav)
plt.show()
This makes use of mov_average_expw from scikits.timeseries. Check that method's documentation to see how I came up with the span parameter based on your code's a variable.
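Note that scikits.timeseries is no longer maintained. If it's unavailable, the same curve can be produced with a plain numpy convolution against a truncated exponential kernel, which is just the recursion unrolled; a hedged sketch under the same zero-initial-condition assumption:
import numpy as np
import matplotlib.pyplot as plt

a = 0.01
y = np.zeros(1000)
y[[3, 7, 10, 20, 200]] = 1

kernel = a * (1 - a) ** np.arange(len(y))  # a, a*(1-a), a*(1-a)**2, ...
s = np.convolve(y, kernel)[:len(y)]  # matches the loop up to its one-sample delay
plt.plot(s)
plt.show()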
I am using Python to generate a PWM signal using a vectorized method, but I am still facing issues. Could anyone help me with this?
import numpy as np
import matplotlib.pyplot as plt
percent=input('Enter the percentage:');
TimePeriod=input('Enter the time period:');
Cycles=input('Enter the number of cycles:');
y=1;
x=np.linspace(0,Cycles*TimePeriod,0.01);
t=(percent/100)*TimePeriod;
for n in range(0,Cycles):
    y[(n*TimePeriod < x) & (x < n*TimePeriod+t)] = 1;
    y[(n*TimePeriod+t < x) & (x < (n+1)*TimePeriod)] = 0;
plt.plot(y)
plt.grid()
end
A vectorized solution:
import numpy as np
import matplotlib.pyplot as plt

percent = 30.0
TimePeriod = 1.0
Cycles = 10
dt = 0.01
t = np.arange(0, Cycles*TimePeriod, dt)
pwm = t % TimePeriod < TimePeriod*percent/100
plt.plot(t, pwm)
plt.show()
Beyond speed (about 100x faster than the loop version here), vectorization has further advantages; from the numpy docs:
vectorized code is more concise and easier to read
fewer lines of code generally means fewer bugs
the code more closely resembles standard mathematical notation (making it easier, typically, to correctly code mathematical constructs)
The biggest issue was that you can't assign to y[index] unless y is a vector, but you made it a number. There are many ways to do that periodic assignment; I personally like to use the modulo operator %.
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline  (only needed inside a Jupyter notebook)

percent = float(input('on percentage:'))
TimePeriod = float(input('time period:'))
Cycles = int(input('number of cycles:'))
dt = 0.01  # 0.01 appears to be your time resolution
x = np.arange(0, Cycles*TimePeriod, dt)  # linspace's third argument is the number of samples, not the step
y = np.zeros_like(x)  # makes an array of zeros of the same length as x
npts = TimePeriod/dt
i = 0
while i*dt < Cycles*TimePeriod:
    if (i % npts)/npts < percent/100.0:
        y[i] = 1
    i = i + 1
plt.plot(x, y, '.-')
plt.ylim([-.1, 1.1])
plt.show()
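For what it's worth, the same modulo test can be applied to the whole index array at once, collapsing the while loop above into a single vectorized assignment (a sketch reusing x, npts, and percent from the code above):
i = np.arange(len(x))
y = ((i % npts) / npts < percent / 100.0).astype(float)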
Short Question
I have a large 10000x10000 elements image, which I bin into a few hundred different sectors/bins. I then need to perform some iterative calculation on the values contained within each bin.
How do I extract the indices of each bin to efficiently perform my calculation using the bins values?
What I am looking for is a solution which avoids the bottleneck of having to select every time ind == j from my large array. Is there a way to obtain directly, in one go, the indices of the elements belonging to every bin?
Detailed Explanation
1. Straightforward Solution
One way to achieve what I need is to use code like the following (see e.g. this related answer), where I digitize my values and then have a j-loop selecting the digitized indices equal to j, as below:
import numpy as np

# This function func() is just a placeholder for a much more complicated function.
# I am aware that my problem could easily be sped up in the specific case of
# the sum() function, but I am looking for a general solution to the problem.
def func(x):
    y = np.sum(x)
    return y

vals = np.random.random(10**8)
nbins = 100
bins = np.linspace(0, 1, nbins+1)
ind = np.digitize(vals, bins)
result = [func(vals[ind == j]) for j in range(1, nbins)]
This is not what I want, as it selects ind == j from my large array every time, which makes this solution very inefficient and slow.
2. Using binned_statistic
The above approach turns out to be the same as the one implemented in scipy.stats.binned_statistic for the general case of a user-defined function. Using Scipy directly, identical output can be obtained with the following:
import numpy as np
from scipy.stats import binned_statistic

vals = np.random.random(10**8)
results = binned_statistic(vals, vals, statistic=func, bins=100, range=[0, 1])[0]
3. Using labeled_comprehension
Another Scipy alternative is to use scipy.ndimage.measurements.labeled_comprehension. Using that function, the above example would become
import numpy as np
from scipy.ndimage import labeled_comprehension

vals = np.random.random(10**8)
nbins = 100
bins = np.linspace(0, 1, nbins+1)
ind = np.digitize(vals, bins)
result = labeled_comprehension(vals, ind, np.arange(1, nbins), func, float, 0)
Unfortunately, this form is also inefficient; in particular, it has no speed advantage over my original example.
4. Comparison with IDL language
To further clarify, what I am looking for is a functionality equivalent to the REVERSE_INDICES keyword of the HISTOGRAM function in the IDL language. Can this very useful functionality be efficiently replicated in Python?
Specifically, using the IDL language the above example could be written as
vals = randomu(s, 1e8)
nbins = 100
bins = [0:1:1./nbins]
h = histogram(vals, MIN=bins[0], MAX=bins[-2], NBINS=nbins, REVERSE_INDICES=r)
result = dblarr(nbins)
for j=0, nbins-1 do begin
    jbins = r[r[j]:r[j+1]-1]  ; Selects indices of bin j
    result[j] = func(vals[jbins])
endfor
The above IDL implementation is about 10 times faster than the Numpy one, due to the fact that the indices of the bins do not have to be selected for every bin. And the speed difference in favour of the IDL implementation increases with the number of bins.
I found that a particular sparse matrix constructor can achieve the desired result very efficiently. It's a bit obscure but we can abuse it for this purpose. The function below can be used in nearly the same way as scipy.stats.binned_statistic but can be orders of magnitude faster
import numpy as np
from scipy.sparse import csr_matrix

def binned_statistic(x, values, func, nbins, range):
    '''The usage is nearly the same as scipy.stats.binned_statistic'''
    N = len(values)
    r0, r1 = range
    digitized = (float(nbins)/(r1 - r0)*(x - r0)).astype(int)
    S = csr_matrix((values, [digitized, np.arange(N)]), shape=(nbins, N))
    return [func(group) for group in np.split(S.data, S.indptr[1:-1])]
I avoided np.digitize because it doesn't use the fact that all bins are equal width and hence is slow, but the method I used instead may not handle all edge cases perfectly.
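A minimal usage sketch of the function defined above, with func and vals as in the question (note that values exactly equal to the upper range limit would fall outside the last bin with this digitization):
vals = np.random.random(10**7)
result = binned_statistic(vals, vals, np.sum, 100, (0.0, 1.0))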
I assume that the binning, done in the example with digitize, cannot be changed. This is one way to go, where you do the sorting once and for all.
import numpy as np
import matplotlib.pyplot as plt

vals = np.random.random(10**4)
nbins = 100
bins = np.linspace(0, 1, nbins+1)
ind = np.digitize(vals, bins)

new_order = np.argsort(ind)
ind = ind[new_order]
ordered_vals = vals[new_order]
# slower way of calculating first_hit (first version of this post):
# _, first_hit = np.unique(ind, return_index=True)
# faster way:
first_hit = np.searchsorted(ind, np.arange(1, nbins-1))
first_hit.sort()

# example of using the data:
for j in range(len(first_hit)-1):
    # I am using a plotting function for your f, to show that they cluster
    plt.plot(ordered_vals[first_hit[j]:first_hit[j+1]], 'o')
The figure shows that the bins are actually clusters as expected:
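The same slices feed any per-bin function in place of the plotting call, e.g. with the func from the question:
result = [func(ordered_vals[first_hit[j]:first_hit[j+1]])
          for j in range(len(first_hit)-1)]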
You can halve the computation time by sorting the array first, then using np.searchsorted.
import numpy as np

vals = np.random.random(10**8)
vals.sort()
nbins = 100
bins = np.linspace(0, 1, nbins+1)
ind = np.digitize(vals, bins)
results = [func(vals[np.searchsorted(ind, j, side='left'):
                     np.searchsorted(ind, j, side='right')])
           for j in range(1, nbins)]
Using 1e8 as my test case, I go from 34 seconds of computation to about 17.
One efficient solution is using the numpy_indexed package (disclaimer: I am its author):
import numpy_indexed as npi

groups = npi.group_by(ind).split(vals)
result = [func(group) for group in groups]
Pandas has very fast grouping code (I think it's written in C), so if you don't mind loading the library, you could do this:
import pandas as pd

pdata = pd.DataFrame({'vals': vals, 'ind': ind})
resultsp = pdata.groupby('ind').sum().values
or, more generally:
pdata = pd.DataFrame({'vals': vals, 'ind': ind})
resultsp = pdata.groupby('ind').agg(func).values
Although the latter is slower for standard aggregation functions (like sum, mean, etc.).
I have an array where discrete sine wave values are recorded and stored. I want to find the max and min of the waveform. Since the sine wave data consists of voltages recorded using a DAQ, there will be some noise, so I want to do a weighted average. Assuming self.yArray contains my sine wave values, here is my code so far:
filterarray = []
filtersize = 2
length = len(self.yArray)
for x in range(0, length-(filtersize+1)):
    for y in range(0, filtersize):
        summation = sum(self.yArray[x+y])
    ave = summation/filtersize
    filterarray.append(ave)
My issue seems to be in the second for loop, where depending on my averaging window size (filtersize), I want to sum up the values in the window to take the average of them. I receive an error saying:
summation = sum(self.yArray[x+y])
TypeError: 'float' object is not iterable
I am an EE with very little experience in programming, so any help would be greatly appreciated!
The other answers correctly describe your error, but this type of problem really calls out for using numpy. Numpy will run faster, be more memory efficient, and is more expressive and convenient for this type of problem. Here's an example:
import numpy as np
import matplotlib.pyplot as plt
# make a sine wave with noise
times = np.arange(0, 10*np.pi, .01)
noise = .1*np.random.ranf(len(times))
wfm = np.sin(times) + noise
# smoothing it with a running average in one line using a convolution
# using a convolution, you could also easily smooth with other filters
# like a Gaussian, etc.
n_ave = 20
smoothed = np.convolve(wfm, np.ones(n_ave)/n_ave, mode='same')
plt.plot(times, wfm, times, -.5+smoothed)
plt.show()
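As the comments in the code suggest, other filters are just a different kernel away. A minimal sketch of Gaussian smoothing of the same wfm array, where the kernel width sigma is an assumed value to tune:
sigma = 5.0  # kernel width in samples (an assumption, tune to taste)
t_k = np.arange(-3*sigma, 3*sigma + 1)
kernel = np.exp(-t_k**2 / (2*sigma**2))
kernel /= kernel.sum()  # normalize so the signal's scale is preserved
smoothed_gauss = np.convolve(wfm, kernel, mode='same')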
If you don't want to use numpy, it should also be noted that there's a logical error in your program that results in the TypeError. The problem is that in the line
summation = sum(self.yArray[x+y])
you're using sum within a loop where you're also accumulating the sum yourself. Either use sum without the inner loop, or loop through the array and add up the elements yourself, but not both. Applying sum to a single indexed array element, which is what the mixed version does, is what raises the error in the first place. That is, here are two solutions:
filterarray = []
filtersize = 2
length = len(self.yArray)
for x in range(0, length-(filtersize+1)):
    summation = sum(self.yArray[x:x+filtersize])  # sum over a section of the array
    ave = summation/filtersize
    filterarray.append(ave)
or
filterarray = []
filtersize = 2
length = len(self.yArray)
for x in range(0, length-(filtersize+1)):
    summation = 0.
    for y in range(0, filtersize):
        summation += self.yArray[x+y]  # accumulate rather than overwrite
    ave = summation/filtersize
    filterarray.append(ave)
self.yArray[x+y] returns a single item from the self.yArray list. If you are trying to get a subset of yArray, you can use the slice operator instead:
summation = sum(self.yArray[x:x+filtersize])
This returns an iterable that the sum builtin can consume, and it makes the inner y loop unnecessary.
A bit more information about python slices can be found here (scroll down to the "Sequences" section): http://docs.python.org/2/reference/datamodel.html#the-standard-type-hierarchy
You could use numpy, like:
import numpy

filtersize = 2
ysums = numpy.cumsum(numpy.array(self.yArray, dtype=float))
ylags = numpy.roll(ysums, filtersize)
ylags[0:filtersize] = 0.0
# each window's sum is the difference of two cumulative sums
moving_avg = (ysums - ylags) / filtersize
Your original code attempts to call sum on the float value stored at yArray[x+y], where x+y evaluates to an integer index into the list.
Try:
summation = sum(self.yArray[x:x+filtersize])
Indeed, numpy is the way to go. One of the nice features of Python is list comprehensions, which let you do away with the typical nested for-loop constructs. Here is an example for your particular problem:
import numpy as np

step = 2
# myarr stands in for self.yArray
res = [np.sum(myarr[i:i+step], dtype=float)/step for i in range(len(myarr)-step+1)]
I intend for part of a program I'm writing to automatically generate Gaussian distributions of various statistics over multiple raw text sources, however I'm having some issues generating the graphs as per the guide at:
python pylab plot normal distribution
The general gist of the plot code is as follows.
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as pyplot
meanAverage = 222.89219487179491 # typical value calculated beforehand
standardDeviation = 3.8857889432054091 # typical value calculated beforehand
x = np.linspace(-3,3,100)
pyplot.plot(x,mlab.normpdf(x,meanAverage,standardDeviation))
pyplot.show()
All it does is produce a rather flat looking and useless y = 0 line!
Can anyone see what the problem is here?
Cheers.
If you read the documentation of matplotlib.mlab.normpdf, this function is deprecated and you should use scipy.stats.norm.pdf instead:
Deprecated since version 2.2: scipy.stats.norm.pdf
And because your distribution's mean is about 222, you should use an x range that actually covers it, e.g. np.linspace(210, 235, 100).
So your code will look like:
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as pyplot

meanAverage = 222.89219487179491  # typical value calculated beforehand
standardDeviation = 3.8857889432054091  # typical value calculated beforehand
x = np.linspace(210, 235, 100)
pyplot.plot(x, norm.pdf(x, meanAverage, standardDeviation))
pyplot.show()
It looks like you made a few small but significant errors: either you're choosing your x vector wrong, or you swapped your stddev and mean. Since your mean is at 222, you probably want your x vector in this area, maybe something like 150 to 300. This way you get all the good stuff; right now you are looking at -3 to 3, which is at the tail of the distribution. Hope that helps.
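A small sketch of deriving the x vector from the computed statistics rather than hard-coding it, using the variable names from the question (mean plus/minus 4 standard deviations covers essentially all of the probability mass):
import numpy as np

x = np.linspace(meanAverage - 4*standardDeviation,
                meanAverage + 4*standardDeviation, 200)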
Looking at the *args to which meanAverage and standardDeviation are being sent, the documentation says the expected values are:
mu : a numdims array of means of a
sigma : a numdims array of standard deviations of a
Does this help?
I am using Matplotlib to create an image based on some data. All of the data falls in the range 0 through 1, and I am trying to color the data based on its value using a colormap. This works perfectly in Matlab; however, when converting the code across to Python, I simply get a black square as the output. I believe this is because I'm plotting the image wrong, so it is plotting all the data as 0. I have searched this problem for several hours and I have tried plt.set_clim([0, 1]), but that didn't seem to do anything. I am new to Python and Matplotlib, although I am not new to programming (Java, JavaScript, PHP, etc.), and I cannot see where I am going wrong. If anybody can see anything glaringly incorrect in my code, I would be extremely grateful.
Thank you
from numpy import *
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.colors as myColor

e1cx=[]
e1cy=[]
e1cz=[]
print("Reading files...")
in_file = open("eigenvector_1_component_x.txt", "rt")
for line in in_file.readlines():
    e1cx.append([])
    for i in line.split():
        e1cx[-1].append(float(i))
in_file.close()
in_file = open("eigenvector_1_component_y.txt", "rt")
for line in in_file.readlines():
    e1cy.append([])
    for i in line.split():
        e1cy[-1].append(float(i))
in_file.close()
in_file = open("eigenvector_1_component_z.txt", "rt")
for line in in_file.readlines():
    e1cz.append([])
    for i in line.split():
        e1cz[-1].append(float(i))
in_file.close()
print("...done")
nx = 120
ny = 128
nz = 190
fx = zeros((nz,nx,ny))
fy = zeros((nz,nx,ny))
fz = zeros((nz,nx,ny))
z = 0
while z<nz-1:
    x = 0
    while x<nx:
        y = 0
        while y<ny:
            fx[z][x][y]=e1cx[(z*128)+y][x]
            fy[z][x][y]=e1cy[(z*128)+y][x]
            fz[z][x][y]=e1cz[(z*128)+y][x]
            y += 1
        x += 1
    z+=1
    if((z % 10) == 0):
        plt.figure(num=None)
        plt.axis("off")
        normals = myColor.Normalize(vmin=0,vmax=1)
        plt.pcolor(fx[z][:][:],cmap='spectral', norm=normals)
        filename = 'Imagex_%d' % z
        plt.savefig(filename)
        plt.colorbar(ticks=[0,2,4,6], format='%0.2f')
Although you have resolved your original issue and have code that works, I wanted to point out that both python and numpy provide several tools that make code like this much simpler to write. Here are a few examples:
Loading data
Instead of building up lists by appending to the end of an empty one, it is often easier to generate them from other lists. For example, instead of
e1cx = []
for line in in_file.readlines():
    e1cx.append([])
    for i in line.split():
        e1cx[-1].append(float(i))
you can simply write:
e1cx = [[float(i) for i in line.split()] for line in in_file]
The syntax [f(y) for y in l] is known as a list comprehension; in addition to being more concise, it will typically execute more quickly than an explicit for loop.
However, for loading tabular data from a text file, it is even simpler to use numpy.loadtxt:
import numpy as np
e1cx = np.loadtxt("eigenvector_1_component_x.txt")
for more information:
print(np.loadtxt.__doc__)
See also its slightly more sophisticated cousin, numpy.genfromtxt.
Reshaping data
Now that we have our data loaded, we need to reshape it. The while loops you use work fine, but numpy provides an easier way. First, if you prefer to use your method of loading the data, then convert your eigenvector arrays into proper numpy arrays using e1cx = array(e1cx), etc.
The array class provides methods for rearranging how the data in an array is indexed without requiring it to be copied. The simplest method is array.reshape, which will do half of what your while loops do:
almost_fx = e1cx.reshape((nz,ny,nx))
Here, almost_fx is a rank-3 array indexed as almost_fx[iz,iy,ix]. One important thing to be aware of is that e1cx and almost_fx share their data. So, if you change e1cx[0,0], you will also change almost_fx[0,0,0].
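A tiny sketch of this view behaviour, with a made-up array standing in for e1cx:
import numpy as np

flat = np.zeros(6)
cube = flat.reshape((3, 2))  # a view: no data is copied
flat[0] = 7.0
print(cube[0, 0])  # prints 7.0, because both names index the same memory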
In your code, you swapped the x and y locations. If this is indeed what you wanted to do, you can accomplish this with array.swapaxes:
fx = almost_fx.swapaxes(1,2)
Of course, you could always combine this into one line
fx = e1cx.reshape((nz,ny,nx)).swapaxes(1,2)
However, if you want the z-slices (fx[z,:,:]) to plot with x horizontal and y vertical, you probably do not want to swap the axes above. Just reshape and plot.
Slicing arrays
Finally, rather than looping over the z-index and testing for multiples of 10, you can loop directly over a slice of the array using:
for fx_slice in fx[::10]:
    # plot fx_slice and save it
This indexing syntax is array[start:end:step], where start is included in the result and end is not. Leaving start blank implies 0, while leaving end blank implies the end of the list.
Summary
In summary, your complete code (after introducing a few more python idioms like enumerate) could look something like:
import numpy as np
from matplotlib import pyplot as pt

shape = (190, 128, 120)
fx = np.loadtxt("eigenvector_1_component_x.txt").reshape(shape).swapaxes(1, 2)
for i, fx_slice in enumerate(fx[::10]):
    z = i*10
    pt.figure()
    pt.axis("off")
    pt.pcolor(fx_slice, cmap='nipy_spectral', vmin=0, vmax=1)  # 'spectral' in older matplotlib
    pt.colorbar(ticks=[0, 2, 4, 6], format='%0.2f')
    pt.savefig('Imagex_%d' % z)
Alternatively, if you want one pixel per element, you can replace the body of the for loop with
    z = i*10
    pt.imsave('Imagex_%d' % z, fx_slice, cmap='nipy_spectral', vmin=0, vmax=1)