How to change the sampling rate of the data - python

How do I change the sampling rate of the data in the list result?
The current sampling rate is 256 Hz, but the desired sampling rate is 250 Hz.
Given:
result - a list of data with a sampling frequency of 256 Hz.
buf.size - the number of samples in the channel
I tried to use scipy.signal.resample
from scipy import signal
result250 = signal.resample(result, buf.size, t=None, axis=0, window=None)
Traceback (most recent call last):
File "****.py", line 82, in <module>
result250 = signal.resample(result, buf.size, t=None, axis=0, window=None)
File "****\scipy\signal\signaltools.py", line 1651, in resample
X = fft(x, axis=axis)
File "****\scipy\fftpack\basic.py", line 249, in fft
tmp = _asfarray(x)
File "****\scipy\fftpack\basic.py", line 134, in _asfarray
return numpy.asfarray(x)
File "****\numpy\lib\type_check.py", line 105, in asfarray
return asarray(a, dtype=dtype)
File "****\numpy\core\numeric.py", line 482, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: invalid literal for float(): 3.3126, 6.6876, 9.3126, 10.0627, ****
There is another, preferable option, linear interpolation, but I also cannot figure out how to use it:
scipy.interpolate.interp1d

You need the total recording time in seconds (here, f.file_duration) to build the time axes.
My answer:
x256 = np.arange(0, f.file_duration, 0.00390625)  # original sample times, 256 Hz
x250 = np.arange(0, f.file_duration, 0.004)       # target sample times, 250 Hz
result250 = np.interp(x250, x256, result)         # evaluate the 256 Hz signal at the 250 Hz times
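Since the question names scipy.interpolate.interp1d as the preferred route, here is a minimal sketch of the same idea with that API; it assumes result is already a numeric 1-D array (not strings) whose length matches the 256 Hz time axis, and that f.file_duration is the recording length in seconds:
import numpy as np
from scipy.interpolate import interp1d
x256 = np.arange(0, f.file_duration, 1/256)  # original sample times; must match len(result)
x250 = np.arange(0, f.file_duration, 1/250)  # target sample times
# linear interpolation; extrapolate in case a 250 Hz time falls just past the last 256 Hz sample
f_lin = interp1d(x256, result, kind='linear', fill_value='extrapolate')
result250 = f_lin(x250)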

I'll try here since the comments are small and I think I'll answer you:
As I was saying, Python lists are just that, lists: a collection of stuff (not necessarily numerical values). They don't know or care about what is inside, and as such they don't know what a sampling frequency even means. (Your ValueError, in fact, hints that the list holds text rather than floats: float() is choking on the string "3.3126, 6.6876, 9.3126, 10.0627, ...".)
Once you accept that the numbers in your list are just a representation of stuff, you can sample them at whatever rate you want; it's just a matter of what you plot them against, or how many values you consider per unit time.
import numpy as np
import matplotlib.pyplot as plt
data = [3.3126, 6.6876, 9.3126, 10.0627, 9.0626, 6.6876, 4.0626, 2.0625, 0.9375, 0.5625, 0.4375, 0.3125, 0.1875, 0.1875, 0.9375, 2.4375, 4.5626, 6.6876, 7.9376, 7.3126, 4.9376, 1.0625, -3.3126, -6.9376, -8.9376, -8.6876, -6.5626, -3.1875, 0.3125, 2.6875, 3.5626, 2.6875, 0.5625, -2.0625, -4.3126, -5.6876, -5.9376, -5.3126, -4.4376, -3.6876, -3.4376, -3.5626, -3.6876, -3.4376, -2.6875, -1.4375, -0.5625, -0.4375, -1.4375, -3.3126, -5.3126, -6.5626, -6.4376, -5.1876, -3.5626, -2.6875, -3.0625, -4.4376, -5.9376, -6.3126, -5.3126, -2.9375, -0.1875]
x256 = np.arange(0, 1, 1/256)[:len(data)]  # time axis at 256 Hz
x200 = np.arange(0, 1, 1/200)[:len(data)]  # time axis at 200 Hz
plt.plot(x256, data, label='x256')
plt.plot(x200, data, label='x200')
plt.legend()
plt.show()
Output: [plot of the same samples drawn against the 256 Hz and 200 Hz time axes]
Does this solve your resampling problem?

Related

How to use xarray to group by time and then run a bin function on the groups?

I have a multidimensional 'mean direction of total ocean swell' (mdts), netCDF data set. The dimensions are time (in hours), latitude, and longitude. I simply wish to group the hourly data by day and then for each day, for each lat/lon grid, determine which of 16 predefined directional bins contains the most hours (maximum could be 24). The direction value associated with the bin with the most hours, for each lat/lon grid, would then be assigned as the direction for that particular day, for each lat/lon grid. I'm applying a custom function to the groupby command and that is where the error is occurring. I think I'm not understanding what is being passed to the function.
Note: each netCDF file represents 1979-2019 for one month. Therefore, I'm using groupby instead of resample as resample adds the 11 other months not in the file. I also first converted all the hours to 00:00 so that groupby would work for grouping by days.
Note: my actual code is set to loop through several netCDF files. I've simplified it here for one file.
My simplified code:
import numpy as np
import xarray as xr
ifile = 'mean_direction_total_swell_Nov_1979_2019_hourly.nc'
# min, max, and center values of angle direction bins
min = [348.75, 11.25, 33.75, 56.25, 78.75, 101.25, 123.75, 146.25, 168.75, 191.25, 213.75, 236.25, 258.75, 281.25, 303.75, 326.25]
max = [ 11.25, 33.75, 56.25, 78.75, 101.25, 123.75, 146.25, 168.75, 191.25, 213.75, 236.25, 258.75, 281.25, 303.75, 326.25, 348.75]
dir = [ 0.0, 22.5, 45.0, 67.5, 90.0, 112.5, 135.0, 157.5, 180.0, 202.5, 225.0, 247.5, 270.0, 292.5, 315.0, 337.5]
# custom function that I think is causing the problem
def bins(x):
    bins_n = np.zeros([16], dtype=int)
    # North bin requires 'or' statement
    if(x >= min[0] or x < max[0]): bins_n[0] = bins_n[0] + 1
    # other bins require 'and' statement
    for i in range(1,16,1): # bins
        if(x >= min[i] and x < max[i]):
            bins_n[i] = bins_n[i] + 1
            break
    slot = np.argmax(bins_n)
    return dir[slot]
idatanc = xr.open_dataset(ifile)
idata = idatanc['mdts']
idata.coords['time'] = idata.time.dt.floor('1D') # setting all hourly values to 0000
idata_dy = idata.groupby("time").apply(bins)
What gets returned (note: this traceback is from the looping program for multiple netCDF files, so line numbers may not correspond exactly to the code above; the errors are the same):
Traceback (most recent call last):
File "<ipython-input-216-82adffe45690>", line 9, in <module>
idata_dy = idata.groupby("time").apply(bins)
File "C:\Users\TWHawk\Anaconda3\envs\tim_python36\lib\site-packages\xarray\core\groupby.py", line 815, in apply
return self.map(func, shortcut=shortcut, args=args, **kwargs)
File "C:\Users\TWHawk\Anaconda3\envs\tim_python36\lib\site-packages\xarray\core\groupby.py", line 800, in map
return self._combine(applied, shortcut=shortcut)
File "C:\Users\TWHawk\Anaconda3\envs\tim_python36\lib\site-packages\xarray\core\groupby.py", line 819, in _combine
applied_example, applied = peek_at(applied)
File "C:\Users\TWHawk\Anaconda3\envs\tim_python36\lib\site-packages\xarray\core\utils.py", line 183, in peek_at
peek = next(gen)
File "C:\Users\TWHawk\Anaconda3\envs\tim_python36\lib\site-packages\xarray\core\groupby.py", line 799, in <genexpr>
applied = (maybe_wrap_array(arr, func(arr, *args, **kwargs)) for arr in grouped)
File "<ipython-input-215-3d060f71ca15>", line 6, in bins
if(x >= min[0] or x < max[0]): bins_n[0] = bins_n[0] + 1
File "C:\Users\TWHawk\Anaconda3\envs\tim_python36\lib\site-packages\xarray\core\common.py", line 119, in __bool__
return bool(self.values)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
The failure happens because groupby passes each day's data to bins as a whole array, not value by value, so a comparison like x >= min[0] yields a boolean array that if cannot evaluate; that is the "truth value ... is ambiguous" error. I didn't check the results all the way, but I think the code below does what you need:
import numpy as np
import xarray as xr
from scipy import stats
def func(x, axis):
    mode, count = np.apply_along_axis(stats.mode, axis, x)
    return mode.squeeze()
infile = 'mean_direction_total_swell_Nov_1979_2019_hourly.nc'
ds = xr.open_dataset(infile)
# make sure range is 0 <= x < 360
ds['mdts'] = np.mod(ds['mdts'], 360)
# bin the data in 16 directions (17 actually, North appears as the first and
# last bin)
step = 360 / 16
centers = np.r_[np.r_[0: 360: step], 0]
edges = np.r_[0, np.r_[step / 2: 360: step], 360]
ds['mdts_binned_idx'] = (ds['mdts'].dims, np.digitize(ds['mdts'], edges))
ds['mdts_binned'] = (ds['mdts'].dims, centers[ds['mdts_binned_idx'] - 1])
# apply stats.mode to get the modal (most common) value in each day
ds2 = xr.Dataset()
ds2['mdts_mode_1d'] = ds['mdts_binned'].resample(time='1D').reduce(func)
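As a quick sanity check of the binning logic above (an addition, not part of the original answer), a few hand-picked angles can be pushed through the same centers and edges arrays; both 0.0 and 359.0 should land in the North bin:
# angles chosen to probe the bin boundaries; North wraps around 0/360
test = np.array([0.0, 10.0, 12.0, 90.0, 350.0, 359.0])
print(centers[np.digitize(test, edges) - 1])
# expected: [ 0.    0.   22.5  90.    0.    0. ]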

How do I construct design matrix in NumPy (for linear regression)?

For this lab I need to sample 150 x-values from a normal distribution with mean 0 and standard deviation 10, then from the x-values construct a design matrix using the features {1, x, x^2}.
We have to sample parameters and then use the design matrix to create y values for the regression data.
The problem is that my design matrix isn't square, and the Moore-Penrose pseudoinverse needs square matrices, but I don't know how to get that to work given the earlier setup of the lab.
This is what I've done:
#Linear Regression Lab
import numpy as np
import math
data = np.random.normal(0, 10, 150)
design_matrix = np.zeros((150,3))
for i in range(150):
    design_matrix[i][0] = 1
    design_matrix[i][1] = data[i]
    design_matrix[i][2] = pow(data[i], 2)
print("-------------------Design Matrix---------------------")
print("|--------1--------|-------x-------|--------x^2--------|")
print(design_matrix[:20])
#sampling parameters
theta_0 = np.random.uniform(low = -30, high = 20)
theta_1 = np.random.uniform(low = -30, high = 20)
theta_2 = np.random.uniform(low = -30, high = 20)
print(theta_0, theta_1, theta_2)
theta = np.array([theta_0, theta_1, theta_2])
theta = np.transpose(theta)
#Moore-Penrose pseudoinverse
MPpi = np.linalg.pinv(design_matrix) ##problem here
y_values = np.linalg.inv(MPpi)
Feel free to edit this incomplete answer
After running this code on Repl, I got the following error message:
Traceback (most recent call last):
File "main.py", line 32, in <module>
y_values = np.linalg.inv(MPpi)
File "<__array_function__ internals>", line 5, in inv
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.8/site-packages/numpy/linalg/linalg.py", line 542, in inv
_assert_stacked_square(a)
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.8/site-packages/numpy/linalg/linalg.py", line 213, in _assert_stacked_square
raise LinAlgError('Last 2 dimensions of the array must be square')
numpy.linalg.LinAlgError: Last 2 dimensions of the array must be square
The first error propagates from taking the inverse of MPpi.
By looking at the docs, pinv swaps the last two dimensions (an m x n matrix becomes n x m), so MPpi is 3 x 150, still not square, and np.linalg.inv rejects it; in fact, no call to inv is needed at all.
As far as the Moore-Penrose inverse (a.k.a. pinv) is concerned, this article suggests that multiplying MPpi by the data vector yields x_0 (notation from Ross MacAusland), which is the best fit for your least-squares regression.
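To make this incomplete answer concrete, here is a minimal sketch of the flow the lab seems to describe, assuming the goal is to generate y values from the design matrix and then recover the parameters by least squares; np.vander is an idiomatic shortcut for the {1, x, x^2} columns:
import numpy as np
data = np.random.normal(0, 10, 150)
X = np.vander(data, N=3, increasing=True)   # columns: 1, x, x^2
theta = np.random.uniform(-30, 20, size=3)  # sampled parameters
y_values = X @ theta                        # regression targets (noise could be added here)
theta_hat = np.linalg.pinv(X) @ y_values    # pinv handles non-square matrices; no inv() needed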

ValueError: scale < 0 during normalization by using gaussian distribution function

I'm trying to read my text file, extract three main parameters (Temperature, Speed, Acceleration), put them in separate lists, and normalize them after fitting a Gaussian distribution to each. To get a good result, I split each parameter's list into positive and negative numbers, fit a Gaussian distribution to each half, and take the mean of the negative numbers as the real minimum and the mean of the positive numbers as the real maximum, instead of taking the min and max of the main list directly (those extreme values can repeat a few times and fall outside the desired confidence interval). I already worked around a RuntimeWarning, but I still get the error below, ValueError: scale < 0, and I have no clue how to solve it. I hope someone has a good idea about a solution, or a better way to apply normalization using a Gaussian distribution function. Thanks for your attention:
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd_launcher.py", line 45, in <module>
main(ptvsdArgs)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\__main__.py", line 265, in main
wait=args.wait)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\__main__.py", line 258, in handle_args
debug_main(addr, name, kind, *extra, **kwargs)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_local.py", line 45, in debug_main
run_file(address, name, *extra, **kwargs)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_local.py", line 79, in run_file
run(argv, addr, **kwargs)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_local.py", line 140, in _run
_pydevd.main()
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_vendored\pydevd\pydevd.py", line 1925, in main
debugger.connect(host, port)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_vendored\pydevd\pydevd.py", line 1283, in run
return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_vendored\pydevd\pydevd.py", line 1290, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "c:\Users\majm\.vscode\extensions\ms-python.python-2018.11.0\pythonFiles\experimental\ptvsd\ptvsd\_vendored\pydevd\_pydev_imps\_pydev_execfile.py", line 25, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "p:\Desktop\correctt\news.py", line 142, in <module>
plotgaussianfunction(t_p_mean, t_sigma_Positive)
File "p:\Desktop\correctt\news.py", line 58, in plotgaussianfunction
s = np.random.normal(mu, sigma,1000)
File "mtrand.pyx", line 1656, in mtrand.RandomState.normal
ValueError: scale < 0
So my code is:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy
import warnings
warnings.filterwarnings("ignore",category =RuntimeWarning)
df = pd.read_csv('D:/me.txt', header=None)
id_set = df[df.index % 4 == 0].astype('int').values
speed = df[df.index % 4 == 1].values
acceleration = df[df.index % 4 == 2].values
temperature = df[df.index % 4 == 3].values
m_data={'p_Speed': s_p_results[:,0],'n_Speed': s_n_results[:,0], 'p_Acceleration': a_p_results[:,0],'n_Acceleration': a_n_results[:,0], 'p_Temperature': t_p_results[:,0],'n_Temperature': t_n_results[:,0]}
m_main_data = pd.DataFrame(data, columns=['Speed','Acceleration','Temperature'], index = id_set[:,0])
data = {'Speed': speed[:,0], 'Acceleration': acceleration[:,0], 'Temperature': temperature[:,0]}
main_data = pd.DataFrame(data, columns=['Speed','Acceleration','Temperature'], index = id_set[:,0])
main_data = main_data.replace([np.inf, -np.inf], np.nan)
def normalize(value, min_value, max_value, min_norm, max_norm):
    new_value = ((max_norm - min_norm)*((value - min_value)/(max_value - min_value))) + min_norm
    return new_value
def createpositiveandnegativelist(listtocreate):
    l_negative = []
    l_positive = []
    for value in listtocreate:
        if (value < 0):
            l_negative.append(value)
        elif (value > 0):
            l_positive.append(value)
    #print(t_negative)
    #print(t_positive)
    return l_negative, l_positive
def calculatemean(listtocalculate):
    return sum(listtocalculate)/len(listtocalculate)
def plotgaussianfunction(mu, sigma):
    s = np.random.normal(mu, sigma, 1000)
    abs(mu - np.mean(s)) < 0.01
    abs(sigma - np.std(s, ddof=1)) < 0.01
    #count, bins, ignored = plt.hist(s,30,density=True)
    #plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp(-(bins-mu)**2/(2*sigma**2)),linewidth=2, color= 'r')
    #plt.show()
    return
def plotboundedCI(s, mu, sigma, lists):
    plt.figure()
    count, bins, ignored = plt.hist(s, 30, density=True)
    plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp(-(bins-mu)**2/(2*sigma**2)), linewidth=2, color='r')
    #confidence interval calculation
    ci = scipy.stats.norm.interval(0.68, loc=mu, scale=sigma)
    #confidence interval for left line
    one_x12, one_y12 = [ci[0], ci[0]], [0, 3]
    #confidence interval for right line
    two_x12, two_y12 = [ci[1], ci[1]], [0, 3]
    plt.title("Gaussian 68% Confidence Interval", fontsize=12, color='black', loc='left', style='italic')
    plt.plot(one_x12, one_y12, two_x12, two_y12, marker='o')
    plt.show()
    results = []
    for value in lists:
        if (ci[0] < value < ci[1]):
            results.append(value)
        else:
            #print("NOT WANTED: ",value)
            pass
    return results
t_negative, t_positive = createpositiveandnegativelist(temperature)
a_negative, a_positive = createpositiveandnegativelist(acceleration)
s_negative, s_positive = createpositiveandnegativelist(speed)
#calculating the mean value
t_p_mean = calculatemean(t_positive)
a_p_mean = calculatemean(a_positive)
s_p_mean = calculatemean(s_positive)
t_n_mean = calculatemean(t_negative)
a_n_mean = calculatemean(a_negative)
s_n_mean = calculatemean(s_negative)
#calculating the sigma value
t_sigma_Negative = np.std(t_negative)
t_sigma_Positive = np.std(t_positive)
a_sigma_Negative = np.std(t_negative)
a_sigma_Positive = np.std(t_positive)
s_sigma_Negative = np.std(t_negative)
s_sigma_Positive = np.std(t_positive)
#plot the gaussian function with histograms
plotgaussianfunction(t_p_mean, t_sigma_Positive)
plotgaussianfunction(t_n_mean, t_sigma_Negative)
plotgaussianfunction(a_p_mean, a_sigma_Positive)
plotgaussianfunction(a_n_mean, a_sigma_Negative)
plotgaussianfunction(s_p_mean, s_sigma_Positive)
plotgaussianfunction(s_n_mean, s_sigma_Negative)
#normalization
t_p_s = np.random.normal(t_p_mean, t_sigma_Positive,1000)
t_n_s = np.random.normal(t_n_mean, t_sigma_Negative,1000)
a_p_s = np.random.normal(a_p_mean, a_sigma_Positive,1000)
a_n_s = np.random.normal(a_n_mean, a_sigma_Negative,1000)
s_p_s = np.random.normal(s_p_mean, s_sigma_Positive,1000)
s_n_s = np.random.normal(s_n_mean, s_sigma_Negative,1000)
#histograms minus the outliers
t_p_results = plotboundedCI(t_p_s, t_p_mean, t_sigma_Positive, t_positive)
t_n_results = plotboundedCI(t_n_s, t_n_mean, t_sigma_Negative, t_negative)
a_p_results = plotboundedCI(a_p_s, a_p_mean, a_sigma_Positive, a_positive)
a_n_results = plotboundedCI(a_n_s, a_n_mean, a_sigma_Negative, a_negative)
s_p_results = plotboundedCI(s_p_s, s_p_mean, s_sigma_Positive, s_positive)
s_n_results = plotboundedCI(s_n_s, s_n_mean, s_sigma_Negative, s_negative)
Note: I have some missing data (NaN or inf) in my lists of values, which I have already replaced by zero. When there are no missing values in my lists of parameters, the code works!
From the documentation of numpy.random.normal:
Parameters:
loc : float or array_like of floats
Mean (“centre”) of the distribution.
scale : float or array_like of floats
Standard deviation (spread or “width”) of the distribution.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if loc and scale are both scalars. Otherwise, np.broadcast(loc, scale).size samples are drawn.
The scale parameter is the standard deviation of the distribution, so it cannot be negative; hence the error you get: ValueError: scale < 0.
You may want to check the sign of this parameter. Give it a try with:
s = np.random.normal(mu, np.abs(sigma), 1000)
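As a further hedged suggestion (an addition, not part of the original answer): given the note about NaN values in the data, sigma may be NaN or otherwise invalid rather than truly negative, so a small guard that fails fast with a clear message can expose the root cause instead of masking it with abs(). safe_normal is a hypothetical helper name:
import numpy as np
def safe_normal(mu, sigma, size=1000):
    # reject NaN, inf, and negative sigma instead of silently flipping the sign
    if not np.isfinite(sigma) or sigma < 0:
        raise ValueError("invalid sigma for np.random.normal: %r" % sigma)
    return np.random.normal(mu, sigma, size)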

Scipy spectrogram loop error

I want to run scipy.signal.spectrogram in a loop with different nperseg, noverlap, and nfft each time. However, I got:
TypeError: 'numpy.float64' object cannot be interpreted as an integer
Here is what I wrote:
import numpy
import scipy.signal

Fs = 10e3
data = testData(Fs)
r = []
for i in numpy.linspace(-0.4, 0.4, 9):
    t_step = 0.5 + i
    f_step = 0.5 - i
    window_length = round(2 * t_step * Fs)
    noverlap = round(t_step * Fs)
    nfft = round(Fs / f_step)
    arr_f, arr_t, fft = scipy.signal.spectrogram(data, Fs,
                                                 nperseg=window_length,
                                                 noverlap=noverlap,
                                                 nfft=nfft,
                                                 window='hanning')
    r.append((arr_f, arr_t, fft))
where testData is copied from the spectrogram documentation.
Scipy version is 1.1.0.
When I run the same code with constant, hardcoded t_step and f_step (without the +/- i), everything goes smoothly over the whole range. So here are my questions:
Why is it not working?
Is there a way not to do it manually?
Full traceback:
File "/Users/desktop/test.py", line 34, in main
window='hanning')
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/signal/spectral.py", line 691, in spectrogram
input_length=x.shape[axis])
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/signal/spectral.py", line 1775, in _triage_segments
win = get_window(window, nperseg)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/signal/windows/windows.py", line 2106, in get_window
return winfunc(*params)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/signal/windows/windows.py", line 786, in hann
return general_hamming(M, 0.5, sym)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/signal/windows/windows.py", line 1016, in general_hamming
return general_cosine(M, [alpha, 1. - alpha], sym)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/signal/windows/windows.py", line 116, in general_cosine
w = np.zeros(M)
TypeError: 'numpy.float64' object cannot be interpreted as an integer
Your calculations of t_step and f_step produce floating-point numbers, so window_length, noverlap, and nfft end up as numpy.float64 values (as the traceback shows), but scipy expects integers. You could for instance change your code to
arr_f, arr_t, fft = signal.spectrogram(data, Fs,
                                       nperseg=window_length.astype(int),
                                       noverlap=noverlap.astype(int),
                                       nfft=nfft.astype(int),
                                       window='hanning')
and scipy should work without problems. The .astype(int) just tweaks the numpy datatype, so instead of the float number 2000.0 scipy receives the integer number 2000. You can find more information about numpy data types in the official documentation.
A better way, of course, is to change your calculations so that they produce integer numbers right away.
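For instance (a hedged sketch based on the loop from the question), wrapping each calculation in int() yields plain Python ints up front, so no .astype(int) is needed at the call site:
# int() also accepts numpy scalars, so this works whatever round() returns
window_length = int(round(2 * t_step * Fs))
noverlap = int(round(t_step * Fs))
nfft = int(round(Fs / f_step))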

ValueError when using lmfit LognormalModel

I have been using lmfit for about a day now and needless to say I know very little about the library. I have been using several built-in models for curve fitting and all of them work flawlessly with the data except the Lognormal Model.
Here is my code:
from numpy import *
from lmfit.models import LognormalModel
import pandas as pd
import scipy.integrate as integrate
import matplotlib.pyplot as plt
data = pd.read_csv('./data.csv', delimiter = ",")
x = data.ix[:, 0]
y = data.ix[:, 1]
print (x)
print (y)
mod = LognormalModel()
pars = mod.guess(y, x=x)
out = mod.fit(y, pars , x=x)
print(out.best_values)
print(out.fit_report(min_correl=0.25))
out.plot()
plt.plot(x, y, 'bo')
plt.plot(x, out.init_fit, 'k--')
plt.plot(x, out.best_fit, 'r-')
plt.show()
and the error output is:
Traceback (most recent call last):
File "Cs_curve_fit.py", line 17, in <module>
pvout = pvmod.fit(y, amplitude= 1, center = 1, sigma =1 , x=x)
File "C:\Users\NAME\Anaconda3\lib\site-packages\lmfit\model.py", line 731, in fit
output.fit(data=data, weights=weights)
File "C:\Users\NAME\Anaconda3\lib\site-packages\lmfit\model.py", line 944, in fit
self.init_fit = self.model.eval(params=self.params, **self.userkws)
File "C:\Users\NAME\Anaconda3\lib\site-packages\lmfit\model.py", line 569, in eval
return self.func(**self.make_funcargs(params, kwargs))
File "C:\Users\NAME\Anaconda3\lib\site-packages\lmfit\lineshapes.py", line 162, in lognormal
x[where(x <= 1.e-19)] = 1.e-19
File "C:\Users\NAME\Anaconda3\lib\site-packages\pandas\core\series.py", line 773, in __setitem__
setitem(key, value)
File "C:\Users\NAME\Anaconda3\lib\site-packages\pandas\core\series.py", line 755, in setitem
raise ValueError("Can only tuple-index with a MultiIndex")
ValueError: Can only tuple-index with a MultiIndex
First, the error message you show cannot have come from the code you post. The error message says that line 17 of the file "Cs_curve_fit.py" reads
pvout = pvmod.fit(y, amplitude= 1, center = 1, sigma =1 , x=x)
but that is not anywhere in your code. Please post the actual code and the actual output.
Second, the problem appears to be happening because the data for x cannot be turned into a 1D numpy array. Not being able to trust your code or output, I would just suggest converting the data to 1D numpy arrays yourself as a first test. Lmfit should be able to handle pandas Series, but it only does a simple coercion to 1D numpy arrays.
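A minimal sketch of that first test, assuming data.csv holds two plain numeric columns (this uses .iloc, since .ix is deprecated in recent pandas):
import numpy as np
x = np.asarray(data.iloc[:, 0], dtype=float)
y = np.asarray(data.iloc[:, 1], dtype=float)
pars = mod.guess(y, x=x)
out = mod.fit(y, pars, x=x)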
