I am trying to draw 20 data points from a uniform random distribution. It should be straightforward, but the two methods below give different results, even though the seed is the same.
Method 1
np.random.seed(1)
x1 = np.random.uniform(low=-10, high=10, size=20)
y1 = np.random.uniform(low=20, high=80, size=20)
The result is:
x1 = [-1.65955991 4.40648987 -9.9977125 -3.95334855 -7.06488218 -8.1532281
-6.27479577 -3.08878546 -2.06465052 0.77633468 -1.61610971 3.70439001
-5.91095501 7.56234873 -9.45224814 3.4093502 -1.65390395 1.17379657
-7.19226123 -6.03797022]
and
y1 = [68.04467412 78.09569454 38.80545069 61.53935694 72.58334914 73.67639981
25.10265268 22.34328699 30.18982517 72.68855021 25.90081003 45.2664575
77.47337181 51.9899171 61.51262684 38.93093786 61.19005566 70.07754031
21.09729664 65.0086589 ]
Method 2
N = 20
np.random.seed(1)
points = [(np.random.uniform(-10,10), np.random.uniform(20,80)) for i in range(N)]
The result is:
[(-1.6595599059485195, 63.219469606529486), (-9.997712503653101, 38.13995435791038), (-7.064882183657739, 25.54031568612787), (-6.274795772446582, 40.73364362258286), (-2.0646505153866013, 52.32900404020142), (-1.6161097119341044, 61.11317002380557), (-5.910955005369651, 72.68704618345672), (-9.452248136041476, 60.22805061070413), (-1.6539039526574602, 53.5213897067451), (-7.192261228095324, 31.886089345092728), (6.014891373510732, 78.09569454316386), (-3.7315164368151432, 61.53935694015885), (7.527783045920767, 73.67639981023083), (-8.299115772604441, 22.343286993972942), (-6.6033916087086215, 72.68855020576478), (-8.033063323338999, 45.26645750030313), (9.157790603010039, 51.98991709838103), (3.837542279009467, 38.93093786036378), (3.7300185536316732, 70.07754031384238), (-9.634234453116164, 65.00865889669805)]
Could anyone help explain the difference?
The first method generates all 20 numbers from the first distribution, followed by all 20 numbers from the second distribution. The second method alternates between the two distributions. The two methods therefore do not generate corresponding numbers in the same order, so you should not expect the same results. Each time you generate a random number, the internal state of the generator changes, and that change affects every subsequent invocation of the generator, regardless of which distribution it is drawing from: all numpy.random methods share the same global internal state.
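To make the shared stream concrete, here is a minimal sketch. It assumes the legacy global generator computes uniform(low, high) as low + (high - low) * u from the raw uniform stream u, so both methods can be reproduced by slicing the same 40 raw draws differently:
import numpy as np

np.random.seed(1)
raw = np.random.uniform(size=40)  # the single stream both methods consume

# Method 1 uses the first 20 draws for x1 and the next 20 for y1:
x1 = -10 + 20*raw[:20]
y1 = 20 + 60*raw[20:]

# Method 2 alternates, so its x-values come from the even-indexed draws
# and its y-values from the odd-indexed ones:
x2 = -10 + 20*raw[0::2]
y2 = 20 + 60*raw[1::2]

print(x1[0], x2[0])  # both -1.6595...: the same first draw
print(x1[1], y2[0])  # 4.4064... vs 63.2194...: the same raw draw, scaled differently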
As an aside, NumPy recommends using numpy.random.Generator methods instead of the legacy numpy.random or numpy.random.RandomState methods (see the NumPy documentation on random sampling).
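A minimal sketch of the recommended API; giving each distribution its own Generator instance also removes the shared-state coupling entirely:
import numpy as np

# One seeded Generator replaces the global state:
rng = np.random.default_rng(1)
x1 = rng.uniform(low=-10, high=10, size=20)
y1 = rng.uniform(low=20, high=80, size=20)

# Or use two independent streams, so the order of draws from one
# distribution cannot affect the other:
rng_x, rng_y = np.random.default_rng(1), np.random.default_rng(2)
points = [(rng_x.uniform(-10, 10), rng_y.uniform(20, 80)) for _ in range(20)]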
Related
I have successfully generated three correlated random variables with Cholesky. I used the same mean (10) and the same standard deviation (5) for all of them. However, when I calculate the mean of the correlated variables, I get unexpected results and can't pinpoint where the problem is. Here is a working example:
import numpy as np
import pandas as pd
corr = np.array([[1,0.7,0.7], [0.7,1,0.7],[0.7,0.7,1]])
chol = np.linalg.cholesky(corr)
N=1000
rand_data = np.random.normal(10, 5, size=(3,N))  # generate uncorrelated data
uncorrelated_data = pd.DataFrame(rand_data, index=['A','B','C']).T/100
uncorrelated_data.corr() # shows barely any correlation as it should
uncorrelated_data.mean()*100 # shows each mean around 10
Output
A 10.308595
B 9.931958
C 10.165347
Generating correlation among them
x = np.dot(chol, rand_data) # cholesky
correlated_data = pd.DataFrame(x, index=['A','B','C']).T/100
print(correlated_data.corr()) # shows there are correlations among variable
correlated_data.mean()*100 # the mean keeps increasing across the variables
Output:
A 10.308595
B 14.308853
C 16.752117
The means of the uncorrelated variables were as expected, but the means of the correlated variables keep increasing from the first variable to the last. My expectation was that each mean would stay around the actual mean of 10. Could anyone help me figure out the problem or suggest an alternative solution?
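A likely cause, sketched below: np.dot(chol, rand_data) scales the mean of each output row by the corresponding row sum of chol (here 1, about 1.41, and about 1.64, which multiplied by 10 matches the drift in the output above). Assuming the goal is correlated variables that keep the mean of 10, the usual remedy is to correlate the zero-mean fluctuations and add the mean back afterwards:
import numpy as np
import pandas as pd

corr = np.array([[1, 0.7, 0.7], [0.7, 1, 0.7], [0.7, 0.7, 1]])
chol = np.linalg.cholesky(corr)
N = 1000
rand_data = np.random.normal(10, 5, size=(3, N))

# Correlate the fluctuations only, then restore the target mean:
demeaned = rand_data - rand_data.mean(axis=1, keepdims=True)
x = np.dot(chol, demeaned) + 10

correlated_data = pd.DataFrame(x, index=['A', 'B', 'C']).T/100
print(correlated_data.corr())        # correlations near 0.7
print(correlated_data.mean() * 100)  # each mean stays near 10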
I'm having some trouble using matplotlib to plot the path of something.
Here's a basic version of the type of thing I'm doing.
Essentially, I'm checking whether the value breaks a certain threshold (5 in this case) at any point during the path and then doing something with it later on.
Now, I have 3 lists set up. end_vect will be based on the other two lists: if the value breaks past 5 at any time during a single simulation, I add the last position of the object to end_vect.
trajectories_vect keeps track of my trajectories for all 5 simulations as a list of lists (I'll clarify this below), and timestep_vect stores the path for a single simulation.
from random import gauss
from matplotlib import pyplot as plt
import numpy as np

starting_val = 5
T = 1               # 1 year
delta_t = .1        # time-step
N = int(T/delta_t)  # how many points on the path are looked at
trials = 5          # number of simulations

# main iterative loop
end_vect = []
trajectories_vect = []
for k in range(trials):
    s_j = starting_val
    timestep_vect = []
    for j in range(N-1):
        xi = gauss(0, 1.0)
        s_j *= xi
        timestep_vect.append(s_j)
    trajectories_vect.append(timestep_vect)
    if max(timestep_vect) > 5:
        end_vect.append(timestep_vect[-1])
    else:
        end_vect.append(0)
Okay, at this point if I print my trajectories, I get something like this (I've only posted two simulations instead of the full 5):
[[ -3.61689976e+00 2.85839230e+00 -1.59673115e+00 6.22743522e-01
1.95127718e-02 -1.72827152e-02 1.79295788e-02 4.26807446e-02
-4.06175288e-02] [ 4.29119818e-01 4.50321728e-01 -7.62901016e-01
-8.31124346e-02 -6.40330554e-03 1.28172906e-02 -1.91664737e-02
-8.29173982e-03 4.03917926e-03]]
This is good and what I want to happen.
Now, my problem is that I don't know how to plot my path (y-axis) against my time (x-axis) properly.
First, I want to put my data into numpy arrays, because I'll need them later to compute statistics and other things, which from experience numpy makes very easy.
#creating numpy arrays from list
#might need to use this with matplotlib somehow
np_trajectories = np.array(trajectories_vect)
time_array = np.arange(1,10)
Here's the crux of the issue, though. When I pass my trajectories (y-axis) to matplotlib, it doesn't treat each list (row in the numpy array) as one path. Instead of 5 paths for 5 simulations, I get 9 paths. I believe I'm passing the data in the wrong shape, so the 9 time intervals are being used the wrong way.
#matplotlib stuff
plt.plot(np_trajectories)
plt.xlabel('timestep')
plt.ylabel('trajectories')
plt.show()
Here's the image produced:
Obviously, this is wrong for the aforementioned reason. Instead, I want 5 paths based on the 5 lists (rows) in my trajectories. I think I understand what the problem is but don't know how to go about fixing it.
Thanks in advance for the help.
When you call np_trajectories = np.array(trajectories_vect), your list of trajectories is transformed into a 2d numpy array. The information about its dimensions is stored in np_trajectories.shape, and, in your case, is (5, 9). Therefore, when you pass np_trajectories to plt.plot(), the plotting library assumes that the y-values are stored in the first dimension, while the second dimension describes individual lines to plot.
In your case, all you need to do is to transpose your np_trajectories array. In numpy, it is as simple as
plt.plot(np_trajectories.T)
plt.xlabel('timestep')
plt.ylabel('trajectories')
plt.show()
If you want to plot the x-axis as time, instead of steps of one, you have to define your time progression as a list or an array. In numpy, you can do something like
times = np.linspace(0, T, N-1)
plt.plot(times, np_trajectories.T)
plt.xlabel('time')
plt.ylabel('trajectories')
plt.show()
which produces the following figure:
Short Question
I have a large 10000×10000-element image, which I bin into a few hundred different sectors/bins. I then need to perform some iterative calculation on the values contained within each bin.
How do I extract the indices of each bin to efficiently perform my calculation using the bins' values?
What I am looking for is a solution which avoids the bottleneck of having to select every time ind == j from my large array. Is there a way to obtain directly, in one go, the indices of the elements belonging to every bin?
Detailed Explanation
1. Straightforward Solution
One way to achieve what I need is code like the following (see e.g. this related answer), where I digitize my values and then have a j-loop selecting the digitized indices equal to j, like below
import numpy as np

# This function func() is just a placeholder for a much more complicated
# function. I am aware that my problem could easily be sped up in the
# specific case of sum(), but I am looking for a general solution.
def func(x):
    y = np.sum(x)
    return y

vals = np.random.random(10**8)  # the size argument must be an integer
nbins = 100
bins = np.linspace(0, 1, nbins+1)
ind = np.digitize(vals, bins)   # labels the bins 1..nbins
result = [func(vals[ind == j]) for j in range(1, nbins+1)]
This is not what I want, as it selects ind == j from my large array every time, which makes this solution very inefficient and slow.
2. Using binned_statistic
The above approach turns out to be the same as the one implemented in scipy.stats.binned_statistic for the general case of a user-defined function. Using SciPy directly, identical output can be obtained with the following
import numpy as np
from scipy.stats import binned_statistic

vals = np.random.random(10**8)
results = binned_statistic(vals, vals, statistic=func, bins=100, range=[0, 1])[0]
3. Using labeled_comprehension
Another SciPy alternative is to use scipy.ndimage.labeled_comprehension. Using that function, the above example would become
import numpy as np
from scipy.ndimage import labeled_comprehension

vals = np.random.random(10**8)
nbins = 100
bins = np.linspace(0, 1, nbins+1)
ind = np.digitize(vals, bins)
result = labeled_comprehension(vals, ind, np.arange(1, nbins+1), func, float, 0)
Unfortunately this form is also inefficient and, in particular, has no speed advantage over my original example.
4. Comparison with IDL language
To further clarify, what I am looking for is functionality equivalent to the REVERSE_INDICES keyword of the HISTOGRAM function in the IDL language. Can this very useful functionality be efficiently replicated in Python?
Specifically, using the IDL language the above example could be written as
vals = randomu(s, 1e8)
nbins = 100
bins = [0:1:1./nbins]
h = histogram(vals, MIN=bins[0], MAX=bins[-2], NBINS=nbins, REVERSE_INDICES=r)
result = dblarr(nbins)
for j=0, nbins-1 do begin
    jbins = r[r[j]:r[j+1]-1]   ; selects the indices of bin j
    result[j] = func(vals[jbins])
endfor
The above IDL implementation is about 10 times faster than the NumPy one, because the indices of each bin do not have to be searched for anew on every iteration. And the speed difference in favour of IDL increases with the number of bins.
I found that a particular sparse matrix constructor can achieve the desired result very efficiently. It's a bit obscure but we can abuse it for this purpose. The function below can be used in nearly the same way as scipy.stats.binned_statistic but can be orders of magnitude faster
import numpy as np
from scipy.sparse import csr_matrix

def binned_statistic(x, values, func, nbins, range):
    '''The usage is nearly the same as scipy.stats.binned_statistic'''
    N = len(values)
    r0, r1 = range
    # map each x directly to its (equal-width) bin index
    digitized = (float(nbins)/(r1 - r0)*(x - r0)).astype(int)
    # building the CSR matrix sorts the values by bin; S.data then holds
    # the values grouped by bin, and S.indptr marks the group boundaries
    S = csr_matrix((values, [digitized, np.arange(N)]), shape=(nbins, N))
    return [func(group) for group in np.split(S.data, S.indptr[1:-1])]
I avoided np.digitize because it doesn't exploit the fact that all bins are of equal width, and hence is slow, but the direct computation I used instead may not handle all edge cases perfectly.
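A minimal usage sketch, mirroring the earlier examples (vals and func are the same placeholders as above):
vals = np.random.random(10**6)  # a smaller array for a quick check
result = binned_statistic(vals, vals, func, nbins=100, range=(0, 1))
print(len(result), result[0])   # 100 per-bin results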
I assume that the binning, done in the example with digitize, cannot be changed. This is one way to go: do the sorting once and for all.
import numpy as np
import matplotlib.pyplot as plt

vals = np.random.random(10**4)
nbins = 100
bins = np.linspace(0, 1, nbins+1)
ind = np.digitize(vals, bins)

# sort once; afterwards every bin occupies one contiguous slice
new_order = np.argsort(ind)
ind = ind[new_order]
ordered_vals = vals[new_order]

# first occurrence of each bin label 1..nbins in the sorted array
# (faster than the np.unique(ind, return_index=True) alternative);
# appending len(ind) closes the final slice
first_hit = np.searchsorted(ind, np.arange(1, nbins+1))
first_hit = np.append(first_hit, len(ind))

# example of using the data: plotting each bin shows the values cluster
for j in range(nbins):
    plt.plot(ordered_vals[first_hit[j]:first_hit[j+1]], 'o')
plt.show()
The figure shows that the bins are actually clusters as expected:
You can halve the computation time by sorting the array first, then using np.searchsorted.
vals = np.random.random(10**8)
vals.sort()
nbins = 100
bins = np.linspace(0, 1, nbins+1)
ind = np.digitize(vals, bins)  # sorted, because vals is sorted

results = [func(vals[np.searchsorted(ind, j, side='left'):
                     np.searchsorted(ind, j, side='right')])
           for j in range(1, nbins+1)]
Using 1e8 as my test case, I go from 34 seconds of computation to about 17.
One efficient solution is using the numpy_indexed package (disclaimer: I am its author):
import numpy_indexed as npi
npi.group_by(ind).split(vals)
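To mirror the earlier examples, the per-bin groups returned by split can then be fed to func (the same placeholder function as above); note that group_by only yields groups for bin labels that actually occur:
groups = npi.group_by(ind).split(vals)  # one array per occupied bin
result = [func(g) for g in groups]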
Pandas has very fast grouping code (I believe it's written in C), so if you don't mind loading the library, you could do this:
import pandas as pd

pdata = pd.DataFrame({'vals': vals, 'ind': ind})
resultsp = pdata.groupby('ind').sum().values
or, more generally:
pdata = pd.DataFrame({'vals': vals, 'ind': ind})
resultsp = pdata.groupby('ind').agg(func).values
Although the latter is slower for standard aggregation functions (like sum, mean, etc.).
In my model, I need to obtain the value of my deterministic variable from a set of parent variables using a complicated python function.
Is it possible to do that?
The following pyMC3 code shows what I am trying to do in a simplified case.
import numpy as np
import pymc as pm

# Predefine values on a two-parameter grid (x, w) for a set of i values (1, 2, 3)
idata = np.array([1, 2, 3])
size = 20
gridlength = size*size
Grid = np.empty((gridlength, 2+len(idata)))
for x in range(size):
    for w in range(size):
        # a silly version of my real model, evaluated on the grid
        Grid[x*size+w, :] = np.array([x, w] + [(x**i + w**i) for i in idata])

# a function to find the nearest value in Grid and return its product with a third variable z
def FindFromGrid(x, w, z):
    return Grid[int(x)*size+int(w), 2:] * z

# generate fake Y data with error
yerror = np.random.normal(loc=0.0, scale=9.0, size=len(idata))
ydata = Grid[16*size+12, 2:]*3.6 + yerror  # i.e. true x=16, w=12 and z=3.6

with pm.Model() as model:
    # priors
    x = pm.Uniform('x', lower=0, upper=size)
    w = pm.Uniform('w', lower=0, upper=size)
    z = pm.Uniform('z', lower=-5, upper=10)

    # expected value
    y_hat = pm.Deterministic('y_hat', FindFromGrid(x, w, z))

    # data likelihood
    ysigmas = np.ones(len(idata))*9.0
    y_like = pm.Normal('y_like', mu=y_hat, sd=ysigmas, observed=ydata)

    # inference...
    start = pm.find_MAP()        # find starting values by optimization
    step = pm.NUTS(state=start)  # instantiate the MCMC sampling algorithm
    trace = pm.sample(1000, step, start=start, progressbar=False)  # draw 1000 posterior samples

print('The trace plot')
fig = pm.traceplot(trace, lines={'x': 16, 'w': 12, 'z': 3.6})
fig.show()
When I run this code, I get an error at the y_hat stage, because the int() function inside FindFromGrid(x,w,z) expects an integer, not a FreeRV.
Finding y_hat from a pre-calculated grid is important because my real model for y_hat does not have an analytical form.
I tried OpenBUGS earlier, but found that this is not possible there. Is it possible in PyMC?
Update
Based on an example on the pyMC GitHub page, I found that I need to add the following decorator to my FindFromGrid(x,w,z) function.
@pm.theano.compile.ops.as_op(itypes=[t.dscalar, t.dscalar, t.dscalar], otypes=[t.dvector])
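Applied to the function from the question, that would look roughly like this (assuming theano.tensor has been imported as t):
import theano.tensor as t

@pm.theano.compile.ops.as_op(itypes=[t.dscalar, t.dscalar, t.dscalar],
                             otypes=[t.dvector])
def FindFromGrid(x, w, z):
    return Grid[int(x)*size + int(w), 2:] * z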
This seems to solve the issue above, but I cannot use the NUTS sampler anymore, since it needs gradients.
Metropolis does not seem to converge.
Which step method should I use in a scenario like this?
You found the correct solution with as_op.
Regarding the convergence: are you using pm.Metropolis() instead of pm.NUTS() by any chance? One reason it might not converge is that Metropolis() by default samples in the joint space, while Gibbs-within-Metropolis is often more effective (and was the default in pymc2). Having said that, I just merged https://github.com/pymc-devs/pymc/pull/587, which changes the default behavior of the Metropolis and Slice samplers to be non-blocked (i.e. within-Gibbs). Other samplers like NUTS, which are primarily designed to sample the joint space, still default to blocked. You can always set this explicitly with the kwarg blocked=True.
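For instance, a minimal sketch of choosing the step method explicitly, using only the kwarg described above and the names from the question's code:
with model:
    step = pm.Metropolis(blocked=False)  # sample variables one at a time (within-Gibbs)
    trace = pm.sample(5000, step, start=start)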
Anyway, update pymc to the most recent master and see if convergence improves. If not, try the Slice sampler.
I have an array where discrete sine-wave values are recorded and stored. I want to find the max and min of the waveform. Since the sine-wave data is voltage recorded with a DAQ, there will be some noise, so I want to do a running windowed average. Assuming self.yArray contains my sine-wave values, here is my code so far:
filterarray = []
filtersize = 2
length = len(self.yArray)
for x in range(0, length-(filtersize+1)):
    for y in range(0, filtersize):
        summation = sum(self.yArray[x+y])
    ave = summation/filtersize
    filterarray.append(ave)
My issue seems to be in the second for loop where, depending on my averaging window size (filtersize), I want to sum up the values in the window to take their average. I receive an error saying:
summation = sum(self.yArray[x+y])
TypeError: 'float' object is not iterable
I am an EE with very little experience in programming, so any help would be greatly appreciated!
The other answers correctly describe your error, but this type of problem really calls out for numpy. Numpy runs faster, is more memory-efficient, and is more expressive and convenient for this type of problem. Here's an example:
import numpy as np
import matplotlib.pyplot as plt
# make a sine wave with noise
times = np.arange(0, 10*np.pi, .01)
noise = .1*np.random.ranf(len(times))
wfm = np.sin(times) + noise
# smoothing it with a running average in one line using a convolution
# using a convolution, you could also easily smooth with other filters
# like a Gaussian, etc.
n_ave = 20
smoothed = np.convolve(wfm, np.ones(n_ave)/n_ave, mode='same')
plt.plot(times, wfm, times, -.5+smoothed)
plt.show()
If you don't want to use numpy, it should also be noted that there's a logical error in your program that results in the TypeError. The problem is that in the line

summation = sum(self.yArray[x+y])

you're using sum within a loop where you're also accumulating a sum. So either use sum without the loop, or loop through the array and add up the elements yourself, but not both (and it's doing both, i.e. applying sum to an indexed array element, that leads to the error in the first place). That is, here are two solutions:
filterarray = []
filtersize = 2
length = len(self.yArray)
for x in range(0, length-(filtersize+1)):
    summation = sum(self.yArray[x:x+filtersize])  # sum over a section of the array
    ave = summation/filtersize
    filterarray.append(ave)
or
filterarray = []
filtersize = 2
length = len(self.yArray)
for x in range(0, length-(filtersize+1)):
    summation = 0.
    for y in range(0, filtersize):
        summation += self.yArray[x+y]  # accumulate the window elements by hand
    ave = summation/filtersize
    filterarray.append(ave)
self.yArray[x+y] returns a single item from the self.yArray list. If you are trying to get a subset of yArray, you can use the slice operator instead:

summation = sum(self.yArray[x:x+filtersize])

which returns an iterable that the sum builtin can use.
A bit more information about Python slices can be found in the "Sequences" section here: http://docs.python.org/2/reference/datamodel.html#the-standard-type-hierarchy
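A quick illustration of the difference, with hypothetical data:
y = [1.0, 2.0, 3.0]
# sum(y[0])   # TypeError: 'float' object is not iterable
sum(y[0:2])   # 3.0 -- a slice is iterable, a single float is not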
You could use numpy, like:
import numpy

filtersize = 2
# cumulative sums let every window sum be computed as a difference,
# so the cost is O(n) regardless of the window size
ysums = numpy.cumsum(numpy.array(self.yArray, dtype=float))
ylags = numpy.roll(ysums, filtersize)
ylags[0:filtersize] = 0.0
moving_avg = (ysums - ylags) / filtersize
Your original code attempts to call sum on the float value stored at yArray[x+y], where x+y evaluates to a single integer index into the list.
Try:

summation = sum(self.yArray[x:x+filtersize])
Indeed, numpy is the way to go. One of the nice features of Python is list comprehensions, which let you do away with the typical nested for-loop constructs. Here is an example for your particular problem:
import numpy as np

step = 2  # the averaging window size
# myarr holds the recorded waveform (e.g. myarr = self.yArray)
res = [np.sum(myarr[i:i+step], dtype=float)/step for i in range(len(myarr)-step+1)]