I am using the scipy.stats.chi2_contingency method to get chi-square statistics. It requires a frequency table, i.e. a contingency table, as a parameter. But I have a feature vector and want to generate the frequency table automatically. Is there any such function available?
I am doing it like this currently:
def contigency_matrix_categorical(data_series, target_series, target_val, indicator_val):
    # Count observations for every (target, indicator) combination.
    observed_freq = {}
    for targets in target_val:
        observed_freq[targets] = {}
        for indicators in indicator_val:
            observed_freq[targets][indicators['val']] = data_series[(target_series == targets) & (data_series == indicators['val'])].count()
    # Flatten the nested dict of counts into a 2-D array for chi2_contingency.
    f_obs = []
    var1 = 0
    var2 = 0
    for i in observed_freq:
        var1 = var1 + 1
        var2 = 0
        for j in observed_freq[i]:
            f_obs.append(observed_freq[i][j] + 5)
            var2 = var2 + 1
    arr = np.array(f_obs).reshape(var1, var2)
    c, p, dof, expected = chi2_contingency(arr)
    return {'score': c, 'pval': p, 'dof': dof}
Here data_series and target_series are the column values, and the other two arguments are the possible values of the target and the indicator.
Can anyone help?
Thanks
You can use pandas.crosstab to generate a contingency table from a DataFrame. From the documentation:
Compute a simple cross-tabulation of two (or more) factors. By default computes a frequency table of the factors unless an array of values and an aggregation function are passed.
Below is a usage example:
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
# Some fake data.
n = 5 # Number of samples.
d = 3 # Dimensionality.
c = 2 # Number of categories.
data = np.random.randint(c, size=(n, d))
data = pd.DataFrame(data, columns=['CAT1', 'CAT2', 'CAT3'])
# Contingency table.
contingency = pd.crosstab(data['CAT1'], data['CAT2'])
# Chi-square test of independence.
c, p, dof, expected = chi2_contingency(contingency)
[The sample data table and the contingency table it generates are not reproduced here.] For one such random draw, scipy.stats.chi2_contingency(contingency) returned (0.052, 0.819, 1, array([[1.6, 0.4], [2.4, 0.6]])).
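Applied to the question's setup, a minimal sketch (assuming data_series and target_series are pandas Series of equal length, as in the question) could replace the hand-rolled counting entirely:
import pandas as pd
from scipy.stats import chi2_contingency

def chi2_from_series(data_series, target_series):
    # Build the contingency (frequency) table directly from the two Series.
    contingency = pd.crosstab(target_series, data_series)
    c, p, dof, expected = chi2_contingency(contingency)
    return {'score': c, 'pval': p, 'dof': dof}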
I'm working with a dataframe with a column containing an np.array per row (in this case representing the mean waveform of brain recordings through time). I want to calculate the Pearson correlation of this column (array by array).
This is my code
from scipy import stats
import numpy as np

length = len(df.Mean)
Mean = []
for i in range(length):
    Mean.append(df.Mean[i])

Correlation_p = np.zeros((length, length))
P_Value_p = np.zeros((length, length))
for i in range(length):
    for j in range(length):
        Correlation_p[i][j], P_Value_p[i][j] = stats.pearsonr(df.Mean[i], df.Mean[j])
This works, but I want to know if there is a more Pythonic way to do it, maybe using df.corr(). I tried but couldn't work out how to do it.
EDIT: the output of df.Mean.head()
0 [-0.2559348091247745, 0.02743063113723536, 0.3...
1 [-0.37025615099744325, -0.11299328141596175, 0...
2 [-1.0543681894876467, -0.8452798699354909, -0....
3 [-0.23527437766943646, -0.28657810260136585, -...
4 [0.45557980303095674, 0.6055674269814991, 0.74...
Name: Mean, dtype: object
The arrays that you would like to correlate seem to sit in single cells of the DataFrame, if I am not mistaken. The following brings them into a format where each array occupies a single column.
I made a data example that resembles the format of df.Mean.head():
df = pd.DataFrame({'x':[np.random.randint(0,5,10), np.random.randint(0,5,10), np.random.randint(0,5,10)]})
You can turn these arrays into columns using this:
df = pd.DataFrame(np.array(df['x'].tolist()).transpose())
Adapt this transformation according to your own dimensions.
From there, it would be fairly straightforward.
A correlation matrix can be created by:
df.corr()
A visualization of the correlation matrix:
import matplotlib.pyplot as plt
plt.matshow(df.corr())
plt.show()
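As an aside, if only the correlation coefficients are needed (stats.pearsonr also returns p-values, which this skips), a hedged alternative is to stack the per-row arrays and let np.corrcoef build the whole matrix in one call:
import numpy as np

# Stack the per-row arrays into a 2-D array (one row per waveform),
# then correlate all rows against each other at once.
Correlation_p = np.corrcoef(np.vstack(list(df.Mean)))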
I have a dataframe which contains the results of different games played. I need to calculate the expected results (how many games end with a given score) with a Poisson distribution, then compare the actual results with the expected ones. So, imagine I have 2 games that resulted in result = 2, 4 games that resulted in result = 9, and so on. I need expected results corresponding to the actual values, in terms of the number of games that ended with a certain result.
I calculated the mean of the results column, which I have read is also called the expected value, and plotted a histogram of the actual results.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Game results DataFrame
game_results = pd.DataFrame({"game_id": [56, 57, 58, 59, 60], "result": [0, 9, 4, 6, 8]})
print(game_results)

# Histogram for the result column
result = game_results["result"]
plt.hist(result)
plt.xlabel("Result")
plt.ylabel("Number of Games")
plt.title("Result Histogram")

lamb = result.mean()
You can draw random Poisson samples using np.random.poisson with your mean and the number of observations, i.e. len(game_results):
import numpy as np
import pandas as pd

game_results = pd.DataFrame({"game_id": [56, 57, 58, 59, 60], "result": [0, 9, 4, 6, 8]})

# Get the lambda (the mean of the observed results)
lamb = game_results["result"].mean()

# Draw a random Poisson sample using the lambda
game_results["expected"] = np.random.poisson(lamb, len(game_results))
I've provided sample data below. It contains an 8x10 matrix of samples from two-dimensional normal distributions. For example, col1 and col2 form one set, col3/col4 another, and so on. I'm trying to calculate the covariance of each individual set in Python. So far I've been unsuccessful, and I'm new to Python. However, below is what I've tried:
import pandas
import numpy
import matplotlib.pyplot as plg
data = pandas.read_excel("testfile.xlsx", header=None)
dataNpy = pandas.DataFrame.to_numpy(data)
mean = numpy.mean(dataNpy, axis=0)
dataAWithoutMean = dataNpy - mean
covB = numpy.cov(dataAWithoutMean)
print("cov is: " + str(covB))
I've been tasked to calculate 4 separate covariance matrices and plot the covariance value for each set. In addition, plot the variance of each set.
dataset:
5.583566716 -0.441667252 -0.663300181 -1.249623134 -6.530464227 -4.984165997 2.594874802 2.646629654
6.129721509 2.374902708 -2.583949571 -2.224729817 0.279965502 -0.850298098 -1.542499771 -2.686894831
5.793226266 1.133844629 -1.939493549 1.570726544 -2.125423302 -1.33966397 -0.42901856 -0.09814741
3.413049714 -0.1133744 -0.032092831 -0.122147373 2.063549449 0.685517481 5.887909556 4.056242954
-2.639701885 -0.716557389 -0.851273969 -0.522784614 -7.347432606 -2.653482175 1.043389849 0.774192416
-1.84827484 -0.636893709 -2.223488277 -1.227420764 0.253999505 0.540299783 -1.593071594 -0.70980532
0.754029441 1.427571018 5.486147486 2.956320758 2.054346142 1.939929175 -3.559875405 -3.074861749
2.009806308 1.916796155 7.820990369 2.953681659 2.071682641 0.105056782 -1.120995825 -0.036335483
1.875128481 1.785216268 -2.607698929 0.244415372 -0.793431956 -1.598343481 -2.120852679 -2.777871862
0.168442246 0.324606905 0.53741174 0.274617158 -2.99037756 -3.323958514 -3.288399345 -2.482277047
Thanks for helping in advance :)
Is this what you need?
import pandas
import numpy
import matplotlib.pyplot as plt
data = pandas.read_excel("Book1.xlsx", header=None)
mean = data.mean(axis=0)
dataAWithoutMean = data - mean
# Variance of each set
dataAWithoutMean.var()
# Covariance matrix
cov = dataAWithoutMean.cov()
plt.matshow(cov)
plt.show()
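Since the question asks for four separate covariance matrices (one per column pair), a small sketch along those lines, assuming the pairing (col 0, col 1), (col 2, col 3), and so on, might be:
import numpy

# One 2x2 covariance matrix per pair of adjacent columns,
# reusing the `data` DataFrame loaded above.
values = data.to_numpy()
for k in range(0, values.shape[1], 2):
    cov2 = numpy.cov(values[:, k:k + 2], rowvar=False)  # columns are the variables
    print("Covariance matrix for columns", k, "and", k + 1)
    print(cov2)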
I have two three-dimensional arrays a and b with [time,lat,lon]. I want to correlate the time series of each grid cell like correlate(a[:,0,0],b[:,0,0]), correlate(a[:,0,1],b[:,0,1]), ... . I'm aiming for two correlations. One with the entire time series and one only where array a surpasses a certain threshold.
The datasets also include some missing values in the time series and I read in both datasets with Xarray. Correlations and masking are done using numpy.
At the moment I walk through each latitude and longitude, grab the time series, mask it to account for NaNs and the threshold, and correlate them. My code looks like this:
def correlate(A, B, var1, var2, TH):
    name = "corr_" + var1 + "_" + var2 + "_TH_" + str(TH) + ".nc"
    a = xr.open_dataset(A).sel(time=slice('1950-03', '2013-12'))
    b = xr.open_dataset(B).sel(time=slice('1950-03', '2013-12'))
    corr = np.empty([a[var1].shape[1], a[var1].shape[2]], dtype=float)
    corr_TH = corr.copy()  # a copy, not an alias, so the two results stay separate
    varname_TH = "r_TH_" + str(TH)
    for lt in range(corr.shape[0]):
        for ln in range(corr.shape[1]):
            corr[lt, ln] = np.ma.corrcoef(a[var1][:, lt, ln], b[var2][:, lt, ln], rowvar=True)[0, 1]
            corr_TH[lt, ln] = np.ma.corrcoef(np.ma.masked_greater(a[var1][:, lt, ln], TH), b[var2][:, lt, ln], rowvar=True)[0, 1]
    # save whole correlations
    ds = xr.Dataset({'r': (['lat', 'lon'], corr), varname_TH: (['lat', 'lon'], corr_TH)},
                    coords={'lon': a['lon'], 'lat': a['lat']})
    return ds
This works in general but is super slow. I found the Xarray function array.stack() to flatten the arrays and tried something like:
A_stack = A.var1.stack(z=('lat','lon'))
B_stack = B.var2.stack(z=('lat','lon'))
cov = ((A_stack - A_stack.mean(axis=0)) * (B_stack - B_stack.mean(axis=0))).mean(axis=0)
corr = cov / (A_stack.std(axis=0) * B_stack.std(axis=0))
The multi-index 'z' over which the array is stacked is retained through the process; however, the correlation array at the end is empty. I suppose that's because of the NaNs.
Does anyone have an idea of how to do this?
Thanks
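A possible NaN-aware shortcut, assuming a reasonably recent xarray that provides xr.corr (added in version 0.16), is to let xarray do the pairwise masking and the reduction over time:
import xarray as xr

a = xr.open_dataset(A)[var1].sel(time=slice('1950-03', '2013-12'))
b = xr.open_dataset(B)[var2].sel(time=slice('1950-03', '2013-12'))

# Pearson correlation along the time axis; NaNs are excluded pairwise.
r = xr.corr(a, b, dim='time')

# Threshold variant: keep only values of a up to TH, as masked_greater does.
r_TH = xr.corr(a.where(a <= TH), b, dim='time')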
I have a dataset in the form of a table:
Score Percentile
381 1
382 2
383 2
...
569 98
570 99
The complete table is here as a Google spreadsheet.
Currently, I am computing a score and then doing a lookup on this dataset (table) to find the corresponding percentile rank.
Is it possible to create a function to calculate the corresponding percentile rank for a given score using a formula instead of looking it up in the table?
It's impossible to recreate the function that generated a given table of data, if no information is provided about the process behind that data.
That being said, we can make some speculation.
Since it's a "percentile" function, it probably represents the cumulative value of a probability distribution of some sort. A very common probability distribution is the normal distribution, whose "cumulative" counterpart (i.e. its integral) is the so-called "error function" ("erf").
In fact, your tabulated data looks a lot like an error function for a variable whose average value is 473.09:
[Plot omitted: the dataset in orange; the fitted error function (erf) in blue.]
However, the agreement is not perfect, and that could be because of three reasons:
1. the fitting procedure I've used to generate the parameters for the error function didn't use the right constraints (because I have no idea what I'm modelling!);
2. your dataset doesn't represent an exact normal distribution, but rather real-world data whose underlying distribution is the normal distribution, and the features of your sample that deviate from the model are being ignored altogether;
3. the underlying distribution is not a normal distribution at all, and its integral just happens to look like the error function by chance.
There is literally no way for me to tell!
If you want to use this function, this is its definition:
import numpy as np
from scipy.special import erf
def fitted_erf(x):
    c = 473.09090474
    w = 37.04826334
    return 50 + 50 * erf((x - c) / (w * np.sqrt(2)))
Tests:
In [2]: fitted_erf(439) # 17 from the table
Out[2]: 17.874052406601457
In [3]: fitted_erf(457) # 34 from the table
Out[3]: 33.20270318344252
In [4]: fitted_erf(474) # 51 from the table
Out[4]: 50.97883169390196
In [5]: fitted_erf(502) # 79 from the table
Out[5]: 78.23955071273468
However, I'd strongly advise you to check whether a fitted function, made without knowledge of your data source, is the right tool for your task.
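As a simpler, distribution-free alternative (my suggestion, not part of the fit above), you could interpolate directly in the table with np.interp:
import numpy as np

# x = tabulated scores, y = tabulated percentiles, loaded as in the P.S. below.
def table_percentile(score):
    # Linear interpolation between table points; values outside the
    # table's range are clamped to its first/last percentile.
    return np.interp(score, x, y)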
P.S.
In case you're interested, this is the code used to obtain the parameters:
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit
tab=np.genfromtxt('table.csv', delimiter=',', skip_header=1)
# using a 'table.csv' file generated by Google Spreadsheets
x = tab[:,0]
y = tab[:,1]
def parametric_erf(x, c, w):
    return 50 + 50 * erf((x - c) / (w * np.sqrt(2)))
pars, j = curve_fit(parametric_erf, x, y, p0=[475,10])
print(pars)
# outputs [ 473.09090474, 37.04826334]
and to generate the plot
import matplotlib.pyplot as plt
plt.plot(x,parametric_erf(x,*pars))
plt.plot(x,y)
plt.show()
Your question is quite vague, but it seems whatever calculation you do ends up with a number in the range 381-570, is that correct? You have a multiline calculation which gives this number? I'm guessing you are repeating it in many places in your code, which is why you want to turn it into a procedure?
For any calculation you can wrap it in a function. For instance:
answer = variable_1 * variable_2 + variable_3
can be written as:
def calculate(v1, v2, v3):
    '''Calculate the result from the inputs.'''
    return v1 * v2 + v3

answer = calculate(variable_1, variable_2, variable_3)
If you would like a definitive answer, then simply post your calculation and I can make it into a function for you.