Scipy spectrogram loop error - python

I want to run scipy.signal.spectrogram in a loop, with different nperseg, noverlap, and nfft values each time. However, I get:
TypeError: 'numpy.float64' object cannot be interpreted as an integer
Here is what I wrote:
Fs = 10e3
data = testData(Fs)
r = []
for i in numpy.linspace(-0.4, 0.4, 9):
    t_step = 0.5 + i
    f_step = 0.5 - i
    window_length = round(2 * t_step * Fs)
    noverlap = round(t_step * Fs)
    nfft = round(Fs / f_step)
    arr_f, arr_t, fft = scipy.signal.spectrogram(data, Fs,
                                                 nperseg=window_length,
                                                 noverlap=noverlap,
                                                 nfft=nfft,
                                                 window='hanning')
    r.append((arr_f, arr_t, fft))
where testData is copied from the spectrogram documentation. The SciPy version is 1.1.0.
When I run the same code with constant, hardcoded t_step and f_step (without the +/- i), everything goes smoothly over the whole range. So here are my questions:
Why is it not working?
Is there a way not to do it manually?
Full traceback:
File "/Users/desktop/test.py", line 34, in main window='hanning')
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/signal/spectral.py", line 691, in spectrogram input_length=x.shape[axis])
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/signal/spectral.py", line 1775, in _triage_segments win = get_window(window, nperseg)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site packages/scipy/signal/windows/windows.py", line 2106, in get_window return winfunc(*params)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/signal/windows/windows.py", line 786, in hann return general_hamming(M, 0.5, sym)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/signal/windows/windows.py", line 1016, in general_hamming return general_cosine(M, [alpha, 1. - alpha], sym)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/signal/windows/windows.py", line 116, in general_cosine w = np.zeros(M)
TypeError: 'numpy.float64' object cannot be interpreted as an integer

Your calculated window_length, noverlap, and nfft are numpy floats (as the traceback shows, round() on a numpy.float64 gives back a numpy.float64 here), but scipy expects integers. You could, for instance, change your code to
arr_f, arr_t, fft = signal.spectrogram(data, Fs,
                                       nperseg=window_length.astype(int),
                                       noverlap=noverlap.astype(int),
                                       nfft=nfft.astype(int),
                                       window='hanning')
and scipy should work without problems. The .astype(int) call just changes the numpy datatype: instead of the float 2000.0, scipy receives the integer 2000. You can find more information about numpy data types in the official documentation.
A better way, of course, is to change your calculations so that they produce integers right away.
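For example, a minimal sketch of the loop with the conversions done where the values are computed (testData is the helper from the question; wrapping each round(...) in int(...) yields plain Python integers regardless of what numpy's round() returns):

import numpy
import scipy.signal

Fs = 10e3
data = testData(Fs)  # testData as defined in the question

r = []
for i in numpy.linspace(-0.4, 0.4, 9):
    t_step = 0.5 + i
    f_step = 0.5 - i
    # int(round(...)) guarantees plain Python ints for scipy
    window_length = int(round(2 * t_step * Fs))
    noverlap = int(round(t_step * Fs))
    nfft = int(round(Fs / f_step))
    arr_f, arr_t, fft = scipy.signal.spectrogram(data, Fs,
                                                 nperseg=window_length,
                                                 noverlap=noverlap,
                                                 nfft=nfft,
                                                 window='hanning')
    r.append((arr_f, arr_t, fft))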

Related

Error using np.arange() in t_span with solve_ivp error in Scipy 1.8.0 but not 1.5.0

For the following input
neuron_dict = {'param_set': sb.morris_lecar_defaults(V_3=11.96),
               'time_range': (0, 10000, 0.0001),
               'initial_cond': (-3.06560496e+01, 7.33832272e-03, 8.35251563e-01),
               'stretch': 4.2,
               'track_event': sb.voltage_passes_threshold,
               'location': np.array([20, 0, 100])}

sb.ivp_solver(sb.morris_lecar, time_range=neuron_dict['time_range'],
              initial_cond=neuron_dict['initial_cond'],
              params=neuron_dict['param_set'],
              track_event=neuron_dict['track_event'])

def ivp_solver(system_of_equations: callable, time_range: tuple, initial_cond: tuple,
               params: callable = morris_lecar_defaults(),
               track_event: callable = voltage_passes_threshold,
               numerical_method='BDF', rtol=1e-8) -> object:
    track_event.direction = 1
    sol = solve_ivp(system_of_equations, time_range, initial_cond, args=(params,),
                    events=track_event,
                    t_eval=np.arange(time_range[0], time_range[1], time_range[2]),
                    method=numerical_method, rtol=rtol)
    return sol
solve_ivp fails in SciPy version 1.8.0 with the following traceback:
Traceback (most recent call last):
File "...MEA_foward_model.py", line 438, in <module>
    main()
File "...MEA_foward_model.py", line 430, in main
    near_synchronous_dual_bursting_example()
File "...MEA_foward_model.py", line 377, in near_synchronous_dual_bursting_example
    ts, voltages, currents, time_events, y_events = integrate_neurons(neurons_list)
File "...MEA_foward_model.py", line 185, in integrate_neurons
    sol = sb.ivp_solver(sb.morris_lecar, time_range = neuron_dict['time_range'], initial_cond = neuron_dict['initial_cond'], params = neuron_dict['param_set'], track_event = neuron_dict['track_event'])
File "...\Square_bursting_oscillations.py", line 106, in ivp_solver
    sol = solve_ivp(system_of_equations, time_range, initial_cond, args=(params,), events= track_event, t_eval= np.arange(time_range[0], time_range[1], time_range[2]), method = numerical_method, rtol = rtol)
File "C:\Anaconda\envs\test\lib\site-packages\scipy\integrate\_ivp\ivp.py", line 512, in solve_ivp
    t0, tf = map(float, t_span)
ValueError: too many values to unpack (expected 2)
but in SciPy version 1.5.0 (my base interpreter) it runs without issue. Looking at the line highlighted in the traceback in ivp.py:
SciPy 1.8.0: t0, tf = map(float, t_span) vs SciPy 1.5.0: t0, tf = float(t_span[0]), float(t_span[1])
I'm not sure whether this has any bearing on why it fails, but it is odd that SciPy 1.8.0 doesn't accept the same input. I would like to use np.arange(start, finish, interval) for my integration; is there any reason why this is failing?
According to the docs:

t_span : 2-tuple of floats
    Interval of integration (t0, tf). The solver starts with t=t0 and integrates until it reaches t=tf.
For a 2-element tuple, these are the same:

t0, tf = map(float, t_span)
t0, tf = float(t_span[0]), float(t_span[1])

But the first raises this error when t_span has more than 2 elements, while the second simply ignores the additional values. It's the t0, tf = ... unpacking that enforces the 2-element requirement.
You probably want to give t_eval the longer arange, and t_span just the end points.
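A sketch of the question's ivp_solver with that change, assuming the same module-level helpers (morris_lecar_defaults, voltage_passes_threshold) and the (start, stop, step) convention used in neuron_dict['time_range']:

import numpy as np
from scipy.integrate import solve_ivp

def ivp_solver(system_of_equations, time_range, initial_cond,
               params=morris_lecar_defaults(),
               track_event=voltage_passes_threshold,
               numerical_method='BDF', rtol=1e-8):
    track_event.direction = 1
    t0, tf, dt = time_range  # time_range is (start, stop, step), e.g. (0, 10000, 0.0001)
    sol = solve_ivp(system_of_equations, (t0, tf), initial_cond, args=(params,),
                    events=track_event,
                    t_eval=np.arange(t0, tf, dt),  # the dense grid goes to t_eval, not t_span
                    method=numerical_method, rtol=rtol)
    return sol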

Problems faced during calculating FFT of wav files

I have been trying to compute the FFT of .wav files. My initial program (for an amplitude-time plot) is:
data_dir = 'C:/Users/asus/Desktop/Song_Test/Split/Done1.wav'
audio1, sfreq = lr.load(data_dir)
len(audio1), sfreq

Duration = len(audio1) / sfreq
print(Duration, " seconds")

time = np.arange(0, len(audio1)) / sfreq

fig, ax = plt.subplots()
ax.plot(time, audio1)
ax.set(xlabel='Time (s)', ylabel='Sound Amplitude')
plt.show()
Here is the function I have programmed so far:
import scipy

def fft_plot(audio, sampling_rate):
    n = int(len(audio))
    T = 1 / sampling_rate
    yf = scipy.fft.fft(audio)
    print(n, T)
    xf = np.linspace(0.0, 1.0/(2.0*T), n/2.0)
    fig, ax = plt.subplot()
    ax.plot(xf, 2.0/n * np.abs(yf[:n//2]))
    plt.grid()
    plt.xlabel("Freq")
    plt.ylabel("Magnitude")
    return plt.show()
The moment I call this function, using fft_plot(audio1, sfreq), the following error pops up:
Traceback (most recent call last):
File "C:\Users\asus\anaconda3\envs\untitled\lib\site-packages\numpy\core\function_base.py", line 117, in linspace
    num = operator.index(num)
TypeError: 'float' object cannot be interpreted as an integer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:/Users/asus/PycharmProjects/untitled/Librosa_level2.py", line 92, in <module>
    fft_plot(audio1, sfreq)
File "C:/Users/asus/PycharmProjects/untitled/Librosa_level2.py", line 59, in fft_plot
    xf = np.linspace(0.0, 1.0/(2.0*T), n//2.0)
File "<__array_function__ internals>", line 6, in linspace
File "C:\Users\asus\anaconda3\envs\untitled\lib\site-packages\numpy\core\function_base.py", line 121, in linspace
    .format(type(num)))
TypeError: object of type <class 'float'> cannot be safely interpreted as an integer.
How can I sort out this float problem? Kindly help me.
The third argument to

xf = np.linspace(0.0, 1.0/(2.0*T), n/2.0)

that is, n / 2.0, is supposed to be an integer:

num : int, optional
    Number of samples to generate. Default is 50. Must be non-negative.

Check the docs for details.
Your n is an integer, but when you divide it by 2.0 you get a fraction (a real number). In Python (and the vast majority of other programming languages), dividing an integer by a float always yields a float.
Solution
Use integer division, n // 2, which yields an integer instead of a float. If you additionally want n to be even, drop the last sample when it is odd:

if n % 2 == 0:
    pass  # even, nothing to do
else:
    n -= 1
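Putting it together, a sketch of the corrected function (self-contained imports added; the axis labels are kept from the question):

import numpy as np
import matplotlib.pyplot as plt
import scipy.fft

def fft_plot(audio, sampling_rate):
    n = len(audio)
    T = 1 / sampling_rate
    yf = scipy.fft.fft(audio)
    # n // 2 is an int, so np.linspace accepts it as the number of samples
    xf = np.linspace(0.0, 1.0 / (2.0 * T), n // 2)
    fig, ax = plt.subplots()  # note: subplots(), not subplot()
    ax.plot(xf, 2.0 / n * np.abs(yf[:n // 2]))
    ax.grid()
    ax.set(xlabel="Freq", ylabel="Magnitude")
    plt.show()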

ValueError: scale < 0 during normalization by using gaussian distribution function

I'm trying to read my text file, extract 3 main parameters, put them in separate lists, and apply normalization to the lists of parameters (Temperature, Speed, Acceleration) after fitting a Gaussian distribution function. To get a good result, I split the positive and negative numbers of each parameter's list, apply the Gaussian distribution function to each part, and pick the mean value of the negative numbers as the real minimum and the mean value of the positive numbers as the real maximum, instead of directly taking the min and max values of the main list, which could be repeated a few times because they are not in the desired confidence interval. The problem is that I faced a RuntimeWarning, which I have already suppressed, but I still get the error below, ValueError: scale < 0, and I have no clue how to solve it. I hope someone has a good idea about a solution for the errors, or a better way to apply normalization using a Gaussian distribution function. Thanks for your attention:
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd_launcher.py", line 45, in <module>
main(ptvsdArgs)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\__main__.py", line 265, in main
wait=args.wait)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\__main__.py", line 258, in handle_args
debug_main(addr, name, kind, *extra, **kwargs)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_local.py", line 45, in debug_main
run_file(address, name, *extra, **kwargs)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_local.py", line 79, in run_file
run(argv, addr, **kwargs)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_local.py", line 140, in _run
_pydevd.main()
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_vendored\pydevd\pydevd.py", line 1925, in main
debugger.connect(host, port)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_vendored\pydevd\pydevd.py", line 1283, in run
return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_vendored\pydevd\pydevd.py", line 1290, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_vendored\pydevd\_pydev_imps\_pydev_execfile.py", line 25, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "p:\Desktop\correctt\news.py", line 142, in <module>
plotgaussianfunction(t_p_mean, t_sigma_Positive)
File "p:\Desktop\correctt\news.py", line 58, in plotgaussianfunction
s = np.random.normal(mu, sigma,1000)
File "mtrand.pyx", line 1656, in mtrand.RandomState.normal
ValueError: scale < 0
So my code is:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy
import warnings

warnings.filterwarnings("ignore", category=RuntimeWarning)

df = pd.read_csv('D:/me.txt', header=None)
id_set = df[df.index % 4 == 0].astype('int').values
speed = df[df.index % 4 == 1].values
acceleration = df[df.index % 4 == 2].values
temperature = df[df.index % 4 == 3].values

m_data = {'p_Speed': s_p_results[:,0], 'n_Speed': s_n_results[:,0],
          'p_Acceleration': a_p_results[:,0], 'n_Acceleration': a_n_results[:,0],
          'p_Temperature': t_p_results[:,0], 'n_Temperature': t_n_results[:,0]}
m_main_data = pd.DataFrame(data, columns=['Speed','Acceleration','Temperature'], index=id_set[:,0])

data = {'Speed': speed[:,0], 'Acceleration': acceleration[:,0], 'Temperature': temperature[:,0]}
main_data = pd.DataFrame(data, columns=['Speed','Acceleration','Temperature'], index=id_set[:,0])
main_data = main_data.replace([np.inf, -np.inf], np.nan)

def normalize(value, min_value, max_value, min_norm, max_norm):
    new_value = ((max_norm - min_norm) * ((value - min_value) / (max_value - min_value))) + min_norm
    return new_value

def createpositiveandnegativelist(listtocreate):
    l_negative = []
    l_positive = []
    for value in listtocreate:
        if value < 0:
            l_negative.append(value)
        elif value > 0:
            l_positive.append(value)
    #print(t_negative)
    #print(t_positive)
    return l_negative, l_positive

def calculatemean(listtocalculate):
    return sum(listtocalculate) / len(listtocalculate)

def plotgaussianfunction(mu, sigma):
    s = np.random.normal(mu, sigma, 1000)
    abs(mu - np.mean(s)) < 0.01
    abs(sigma - np.std(s, ddof=1)) < 0.01
    #count, bins, ignored = plt.hist(s, 30, density=True)
    #plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp(-(bins-mu)**2/(2*sigma**2)), linewidth=2, color='r')
    #plt.show()
    return

def plotboundedCI(s, mu, sigma, lists):
    plt.figure()
    count, bins, ignored = plt.hist(s, 30, density=True)
    plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp(-(bins-mu)**2/(2*sigma**2)), linewidth=2, color='r')
    # confidence interval calculation
    ci = scipy.stats.norm.interval(0.68, loc=mu, scale=sigma)
    # confidence interval for left line
    one_x12, one_y12 = [ci[0], ci[0]], [0, 3]
    # confidence interval for right line
    two_x12, two_y12 = [ci[1], ci[1]], [0, 3]
    plt.title("Gaussian 68% Confidence Interval", fontsize=12, color='black', loc='left', style='italic')
    plt.plot(one_x12, one_y12, two_x12, two_y12, marker='o')
    plt.show()
    results = []
    for value in lists:
        if ci[0] < value < ci[1]:
            results.append(value)
        else:
            #print("NOT WANTED: ", value)
            pass
    return results

t_negative, t_positive = createpositiveandnegativelist(temperature)
a_negative, a_positive = createpositiveandnegativelist(acceleration)
s_negative, s_positive = createpositiveandnegativelist(speed)

# calculating the mean value
t_p_mean = calculatemean(t_positive)
a_p_mean = calculatemean(a_positive)
s_p_mean = calculatemean(s_positive)
t_n_mean = calculatemean(t_negative)
a_n_mean = calculatemean(a_negative)
s_n_mean = calculatemean(s_negative)

# calculating the sigma value
t_sigma_Negative = np.std(t_negative)
t_sigma_Positive = np.std(t_positive)
a_sigma_Negative = np.std(t_negative)
a_sigma_Positive = np.std(t_positive)
s_sigma_Negative = np.std(t_negative)
s_sigma_Positive = np.std(t_positive)

# plot the gaussian function with histograms
plotgaussianfunction(t_p_mean, t_sigma_Positive)
plotgaussianfunction(t_n_mean, t_sigma_Negative)
plotgaussianfunction(a_p_mean, a_sigma_Positive)
plotgaussianfunction(a_n_mean, a_sigma_Negative)
plotgaussianfunction(s_p_mean, s_sigma_Positive)
plotgaussianfunction(s_n_mean, s_sigma_Negative)

# normalization
t_p_s = np.random.normal(t_p_mean, t_sigma_Positive, 1000)
t_n_s = np.random.normal(t_n_mean, t_sigma_Negative, 1000)
a_p_s = np.random.normal(a_p_mean, a_sigma_Positive, 1000)
a_n_s = np.random.normal(a_n_mean, a_sigma_Negative, 1000)
s_p_s = np.random.normal(s_p_mean, s_sigma_Positive, 1000)
s_n_s = np.random.normal(s_n_mean, s_sigma_Negative, 1000)

# histograms minus the outliers
t_p_results = plotboundedCI(t_p_s, t_p_mean, t_sigma_Positive, t_positive)
t_n_results = plotboundedCI(t_n_s, t_n_mean, t_sigma_Negative, t_negative)
a_p_results = plotboundedCI(a_p_s, a_p_mean, a_sigma_Positive, a_positive)
a_n_results = plotboundedCI(a_n_s, a_n_mean, a_sigma_Negative, a_negative)
s_p_results = plotboundedCI(s_p_s, s_p_mean, s_sigma_Positive, s_positive)
s_n_results = plotboundedCI(s_n_s, s_n_mean, s_sigma_Negative, s_negative)
Note: I have some missing data (NaN or inf) in my lists of values, which has already been replaced by zero! But when there are no missing values in my parameter lists, the code works!
From the documentation of numpy.random.normal:

Parameters:

loc : float or array_like of floats
    Mean (“centre”) of the distribution.
scale : float or array_like of floats
    Standard deviation (spread or “width”) of the distribution.
size : int or tuple of ints, optional
    Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if loc and scale are both scalars. Otherwise, np.broadcast(loc, scale).size samples are drawn.
The scale is the standard deviation of the distribution, hence it cannot be negative. That is exactly the error you get: ValueError: scale < 0.
You may want to check the sign of this parameter. Give it a try with:

s = np.random.normal(mu, np.abs(sigma), 1000)
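As a small illustration of that check, a sketch of a hypothetical sampling helper (the NaN test is an extra assumption, added because of the note above about missing data):

import numpy as np

def safe_normal_samples(mu, sigma, size=1000):
    # np.random.normal raises ValueError for scale < 0, so take np.abs(sigma)
    # as suggested above; a NaN sigma (e.g. from a list that still contains
    # missing values) is flagged explicitly instead of failing later
    if np.isnan(sigma):
        raise ValueError("sigma is NaN; check the input list for missing data")
    return np.random.normal(mu, np.abs(sigma), size)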

How to change the sampling rate of the data

How to change the sampling rate of the data in the list result.
The current sampling rate is 256, but the desired sampling rate is 250.
Given:
result - a list of data with a sampling frequency of 256.
buf.size - the amount of signal in the channel
I tried to use scipy.signal.resample
from scipy import signal
result250 = signal.resample(result, buf.size, t=None, axis=0, window=None)
Traceback (most recent call last):
File "****.py", line 82, in <module>
    result250 = signal.resample(result, buf.size, t=None, axis=0, window=None)
File "****\scipy\signal\signaltools.py", line 1651, in resample
    X = fft(x, axis=axis)
File "****\scipy\fftpack\basic.py", line 249, in fft
    tmp = _asfarray(x)
File "****\scipy\fftpack\basic.py", line 134, in _asfarray
    return numpy.asfarray(x)
File "****\numpy\lib\type_check.py", line 105, in asfarray
    return asarray(a, dtype=dtype)
File "****\numpy\core\numeric.py", line 482, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: invalid literal for float(): 3.3126, 6.6876, 9.3126, 10.0627, ****
There is another option, linear interpolation (preferable), but I also cannot figure out how to use it:
scipy.interpolate.interp1d
You must record the time (in seconds: f.file_duration).
My answer:

x250 = np.arange(0, f.file_duration, 0.004)       # frequency 250
x256 = np.arange(0, f.file_duration, 0.00390625)  # frequency 256
result250 = np.interp(x250, x256, result)  # result is sampled at 256 Hz, so it pairs with x256
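For completeness, a sketch of how scipy.signal.resample could be used instead; the float conversion addresses the ValueError above (result apparently holds strings rather than numbers), and deriving the new sample count from the 250/256 ratio is an assumption:

import numpy as np
from scipy import signal

samples = np.asarray(result, dtype=float)       # result must be numeric, not strings
num_250 = int(round(len(samples) * 250 / 256))  # new length for a 250 Hz rate
result250 = signal.resample(samples, num_250)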
I'll answer here since the comments are small:
As I was saying, Python lists are just that, lists: a collection of stuff (not necessarily numerical values). They don't know or care about what is inside, and as such they don't know what a sampling frequency even means.
Once you accept that the numbers in your list are just a representation of stuff, you can sample them at whatever rate you want; it's just a matter of what you plot them against, or of how many values you consider per unit time.
import numpy as np
import matplotlib.pyplot as plt
data = [3.3126, 6.6876, 9.3126, 10.0627, 9.0626, 6.6876, 4.0626, 2.0625, 0.9375, 0.5625, 0.4375, 0.3125, 0.1875, 0.1875, 0.9375, 2.4375, 4.5626, 6.6876, 7.9376, 7.3126, 4.9376, 1.0625, -3.3126, -6.9376, -8.9376, -8.6876, -6.5626, -3.1875, 0.3125, 2.6875, 3.5626, 2.6875, 0.5625, -2.0625, -4.3126, -5.6876, -5.9376, -5.3126, -4.4376, -3.6876, -3.4376, -3.5626, -3.6876, -3.4376, -2.6875, -1.4375, -0.5625, -0.4375, -1.4375, -3.3126, -5.3126, -6.5626, -6.4376, -5.1876, -3.5626, -2.6875, -3.0625, -4.4376, -5.9376, -6.3126, -5.3126, -2.9375, -0.1875]
x256 = np.arange(0, 1, 1/256)[:len(data)]
x200 = np.arange(0, 1, 1/200)[:len(data)]
plt.plot(x256, data, label='x256')
plt.plot(x200, data, label='x200')
plt.legend()
Output: [plot of the same samples drawn against both the x256 and x200 time axes]
Does this solve your resampling problem?

Reading in a text file as a numpy array with float values

I do not know Python at all, thus I have been unsuccessful in interpreting similar previous answers and using them.
I have a Python script that I wish to execute in Unix. The script uses an input file, but I do not understand how to ensure that the input file is read as a numpy float array.
My input file is called chk.bed and it has one column of numeric values
-bash-4.1$ head chk.bed
7.25236
0.197037
0.189464
2.60056
0
32.721
11.3978
3.85692
0
0
The original script is:
from scipy.stats import gaussian_kde
import numpy as np
#assume "fpkm" is a NumPy array of log2(fpkm) values
kernel = gaussian_kde(fpkm)
xi = np.linspace(fpkm.min(), fpkm.max(), 100)
yi = kernel.evaluate(xi)
mu = xi[np.argmax(yi)]
U = fpkm[fpkm > mu].mean()
sigma = (U - mu) * np.sqrt(np.pi / 2)
zFPKM = (fpkm - mu) / sigma
What I understood so far is that I need to make sure the script reads the file, so I included fpkm = open("chk.bed", 'r') in the code.
However, on executing the code I get the following error:
Traceback (most recent call last):
File "./calc_zfpkm.py", line 10, in <module>
    kernel = gaussian_kde(fpkm)
File "/usr/lib64/python2.6/site-packages/scipy/stats/kde.py", line 88, in __init__
    self._compute_covariance()
File "/usr/lib64/python2.6/site-packages/scipy/stats/kde.py", line 340, in _compute_covariance
    self.factor * self.factor)
File "/usr/lib64/python2.6/site-packages/numpy/lib/function_base.py", line 1971, in cov
    X = array(m, ndmin=2, dtype=float)
TypeError: float() argument must be a string or a number
This seems to suggest that I am not reading the file in correctly, so the function gaussian_kde() cannot read the values as floats.
Can you please help?
Thanks!
You're passing a file object to gaussian_kde, but it expects a NumPy array; you need to use numpy.loadtxt first to load the data into an array:

>>> import numpy as np
>>> arr = np.loadtxt('chk.bed')
>>> arr
array([  7.25236 ,   0.197037,   0.189464,   2.60056 ,   0.      ,
        32.721   ,  11.3978  ,   3.85692 ,   0.      ,   0.      ])
>>> gaussian_kde(arr)
<scipy.stats.kde.gaussian_kde object at 0x7f7350390190>
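Applied to the original script, a minimal sketch; whether chk.bed already contains log2(fpkm) values (as the script's comment assumes) is for you to confirm:

from scipy.stats import gaussian_kde
import numpy as np

# read chk.bed as a float array instead of passing a file object
fpkm = np.loadtxt("chk.bed")

kernel = gaussian_kde(fpkm)
xi = np.linspace(fpkm.min(), fpkm.max(), 100)
yi = kernel.evaluate(xi)
mu = xi[np.argmax(yi)]
U = fpkm[fpkm > mu].mean()
sigma = (U - mu) * np.sqrt(np.pi / 2)
zFPKM = (fpkm - mu) / sigma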
Here you can find the R script for zFPKM normalization. I was inspired by the Python code given above, and also by this link: https://www.biostars.org/p/94680/
install.packages("ks","pracma")
library(ks)
library(pracma)
/* fpkm is an example data */
fpkm <- c(1,2,3,4,5,6,7,8,4,5,6,5,6,5,6,5,5,5,5,6,6,78,8,89,8,8,8,2,2,2,1,1,4,4,4,4,4,4,4,4,4,4,4,3,2,2,3,23,2,3,23,4,2,2,4,23,2,2,24,4,4,2,2,4,4,4,2,2,4,4,2,2,4,2,45,5,5,5,3,2,2,4,4,4,4,4,4,4,4,4,3,2,2,3,23,2,3,23,4,2,2,4,23,2,2,24,4,4,2,2,4,4,4,2,2,4,4,2,2,4,2,45,5,5,5,3,2,2)
xi=linspace(min(fpkm),max(fpkm),100)
fhat = kde(x=fpkm,gridsize=100,eval.points=xi)
/* here I put digits=0. if I you do not round the numbers(yi) the results are a little bit changing.*/
yi=round(fhat$estimate,digits=0)
mu=xi[which.max(yi)]
U=mean(fpkm[fpkm>mu])
sigma=(U-mu)* (sqrt(pi/2))
zFPKM = (fpkm - mu) / sigma
By the way, I have a question: can I apply the same approach to RPKM?
