Create random database and convert it from numpy to pandas - python

I would like to create a random database. In the database I want to create coordinates so that in the end I can plot it; that is, each point is supposed to have an X and a Y coordinate.
I have created data for one set of points, but it is in numpy and I want it in pandas, and I keep getting errors.
this is how I have created it:
#database 1
# defining the mean
mu = 0.5
# defining the standard deviation
sigma = 0.1
# The random module uses the seed value as a base
# to generate a random number. If seed value is not
# present, it takes the system’s current time.
np.random.seed(0)
# define the x co-ordinates
X = np.random.normal(mu, sigma, (395, 1))
# define the y co-ordinates
Y = np.random.normal(mu * 2, sigma * 3, (395, 1))
index=[X,Y]
##here I get all the errors
df = pd.DataFrame({'X': X, 'Y': Y}, index=index)
The error I received:
Exception: Data must be 1-dimensional
I have also tried other methods to make it a dataframe, but it didn't work, and I believe it is something tiny that I'm missing.
My end goal is to create a dataframe from those arrays.

The way you are calling np.random.normal creates arrays of shape (395, 1); that is, an array containing 395 arrays of one element each.
Example:
array([[0.67640523],
[0.54001572],
[0.5978738 ],
[0.72408932],
[0.6867558 ],
[0.40227221],..])
This is what is breaking the pd.DataFrame call. To solve it, pass the shape argument as (395,) or simply 395 to create a one-dimensional array.
#database 1
# defining the mean
mu = 0.5
# defining the standard deviation
sigma = 0.1
# The random module uses the seed value as a base
# to generate a random number. If seed value is not
# present, it takes the system’s current time.
np.random.seed(0)
# define the x co-ordinates
X = np.random.normal(mu, sigma, 395)
# define the y co-ordinates
Y = np.random.normal(mu * 2, sigma * 3, 395)
index = [X, Y]
df = pd.DataFrame({'X': X, 'Y': Y}, index=index)
I would also suggest removing the line index = [X, Y] and the index parameter from the pd.DataFrame call, as it doesn't make much sense: you would be setting as index the same values that you have in X and Y. The final code would look like this:
#database 1
# defining the mean
mu = 0.5
# defining the standard deviation
sigma = 0.1
# The random module uses the seed value as a base
# to generate a random number. If seed value is not
# present, it takes the system’s current time.
np.random.seed(0)
# define the x co-ordinates
X = np.random.normal(mu, sigma, 395)
print(X.shape)
# define the y co-ordinates
Y = np.random.normal(mu * 2, sigma * 3, 395)
print(Y.shape)
# build the dataframe from the two 1-D arrays
df = pd.DataFrame({'X': X, 'Y': Y})

You should replace
X = np.random.normal(mu, sigma, (395, 1)) with X = np.random.normal(mu, sigma, 395) and Y = np.random.normal(mu * 2, sigma * 3, (395, 1)) with Y = np.random.normal(mu * 2, sigma * 3, 395).
This way X and Y will be 1-dimensional. To confirm, check the array shapes:
np.random.normal(mu, sigma, (395, 1)).shape
(395, 1)  # a 2-dimensional array
np.random.normal(mu, sigma, 395).shape
(395,)  # a 1-dimensional array
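As a side note: if you already have the (395, 1)-shaped arrays, you could also flatten them instead of regenerating them. A minimal sketch, assuming the X and Y from the question:
# ravel() turns the (395, 1) arrays into 1-D (395,) arrays
df = pd.DataFrame({'X': X.ravel(), 'Y': Y.ravel()})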

Is this the plot you want? (This snippet works on the original (395, 1)-shaped arrays.)
df = pd.DataFrame(list({'X': X, 'Y': Y}.items()))
df.explode(1).apply(lambda x: x[1][0], axis=1).plot()
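If the end goal is just to plot the points, a simpler route (a sketch, assuming the flattened DataFrame from the answers above) would be pandas' built-in scatter plot:
import matplotlib.pyplot as plt
# scatter the Y coordinate against the X coordinate
df.plot.scatter(x='X', y='Y')
plt.show()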


Interpolate between linear and nonlinear values

I have been able to interpolate values successfully from linear values of x to sine-like values of y.
However - I am struggling to interpolate the other way - from nonlinear values of y to linear values of x.
Below is a toy example:
import numpy as np
import matplotlib.pylab as plt
from scipy import interpolate
#create 100 x values
x = np.linspace(-np.pi, np.pi, 100)
#create 100 values of y where y= sin(x)
y=np.sin(x)
#learn function to map y from x
f = interpolate.interp1d(x, y)
With new values of linear x
xnew = np.array([-1,1])
I get correctly interpolated values of nonlinear y
ynew = f(xnew)
print(ynew)
array([-0.84114583, 0.84114583])
The problem comes when I try and interpolate values of x from y.
I create a new function, the reverse of f:
f2 = interpolate.interp1d(y,x,kind='cubic')
I put in values of y that I successfully interpolated before
ynew=np.array([-0.84114583, 0.84114583])
I am expecting to get the original values of x [-1, 1]
But I get:
array([-1.57328791, 1.57328791])
I have tried putting in other values for the 'kind' parameter with no luck and am not sure if I have got the wrong approach here. Thanks for your help
I guess the problem arises from the fact that x is not a function of y: for a given y value there may be more than one corresponding x value.
Take a look at a truncated range of data.
When x ranges from 0 to np.pi/2, then for every y value there is a unique x value.
In this case the snippet below works as expected.
>>> import numpy as np
>>> from scipy import interpolate
>>> x = np.linspace(0, np.pi / 2, 100)
>>> y = np.sin(x)
>>> f = interpolate.interp1d(x, y)
>>> f([0, 0.1, 0.3, 0.5])
array([0. , 0.09983071, 0.29551713, 0.47941047])
>>> f2 = interpolate.interp1d(y, x)
>>> f2([0, 0.09983071, 0.29551713, 0.47941047])
array([0. , 0.1 , 0.3 , 0.50000001])
Maxim provided the reason for this behavior: interp1d is designed to interpolate functions, and in your case the inverse map x = arcsin(y) is a function only on a limited interval. This leads to interesting phenomena in the interpolation routine, which interpolates between neighboring y-values; for the full sine curve, the nearest y-value is not necessarily the next point along the x-y curve but may lie half a period away. An illustration:
import numpy as np
import matplotlib.pylab as plt
from scipy import interpolate

xmin = -np.pi
xmax = np.pi
fig, axes = plt.subplots(3, 3, figsize=(15, 10))
for i, fac in enumerate([2, 1, 0.5]):
    x = np.linspace(xmin * fac, xmax * fac, 100)
    y = np.sin(x)
    # x -> y
    f = interpolate.interp1d(x, y)
    x_fit = np.linspace(xmin * fac, xmax * fac, 1000)
    y_fit = f(x_fit)
    axes[i][0].plot(x_fit, y_fit)
    axes[i][0].set_ylabel(f"sin period {fac}")
    if not i:
        axes[i][0].set_title(label="interpolation x->y")
    # y -> x
    f2 = interpolate.interp1d(y, x)
    y2_fit = np.linspace(.99 * min(y), .99 * max(y), 1000)
    x2_fit = f2(y2_fit)
    axes[i][1].plot(x2_fit, y2_fit)
    if not i:
        axes[i][1].set_title(label="interpolation y->x")
    # y -> x with cubic interpolation
    f3 = interpolate.interp1d(y, x, kind="cubic")
    y3_fit = np.linspace(.99 * min(y), .99 * max(y), 1000)
    x3_fit = f3(y3_fit)
    axes[i][2].plot(x3_fit, y3_fit)
    if not i:
        axes[i][2].set_title(label="cubic interpolation y->x")
plt.show()
As you can see, the interpolation works along the ordered list of y-values (as you instructed it to), and this works particularly badly with cubic interpolation.
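If you really need to invert y -> x here, one option (not shown above, just a sketch) is to restrict the data to a single monotonic branch of the sine before building the inverse interpolator:
import numpy as np
from scipy import interpolate

x = np.linspace(-np.pi, np.pi, 100)
y = np.sin(x)
# keep only the monotonic branch -pi/2 <= x <= pi/2, where sin is invertible
branch = (x >= -np.pi / 2) & (x <= np.pi / 2)
f2 = interpolate.interp1d(y[branch], x[branch])
print(f2([-0.84114583, 0.84114583]))  # close to [-1, 1]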

Python: How do I fit a line to a specific interval of data?

I am trying to fit a line to the 9.0 to 10.0 um regime of my data set. Here is my plot:
Unfortunately, it's a scatter plot whose x values are not ordered from small to large, so I can't just apply the optimize.curve_fit function to a slice of indices to get the desired range of x values.
Below is my go-to procedure for curve fitting. How would I modify it to fit only the 9.0 to 10.0 um x-value range (in my case, the x_dist variable), whose points are scattered randomly throughout the indices?
def func(x, a, b):  # define your fitting function
    return a * x + b

initialguess = [-14.0, 0.05]  # initial guess for the parameters of func
# call the fitting routine; returns the optimal fit parameters and their
# estimated covariance
fit, covariance = optimize.curve_fit(
    func,          # function to fit
    x_dist,        # data for the independent variable
    xdiff_norm,    # data for the dependent variable
    initialguess,  # initial guess of the fit parameters
)
# print value and one standard deviation of each fit parameter
print("linear coefficient:", fit[0], "+-", np.sqrt(covariance[0][0]))
print("offset coefficient:", fit[1], "+-", np.sqrt(covariance[1][1]))
print(covariance)
You correctly identified that the problem arises because your x-value data are not ordered. You can address this in different ways; one is to use a Boolean mask to filter out the unwanted values. I tried to stay as close as possible to your example:
from matplotlib import pyplot as plt
import numpy as np
from scipy import optimize

# fake data generation
np.random.seed(1234)
arr = np.linspace(0, 15, 100).reshape(2, 50)
arr[1, :] = np.random.random(50)
arr[1, 20:45] += 2 * arr[0, 20:45] - 5
rng = np.random.default_rng()
rng.shuffle(arr, axis=1)
x_dist = arr[0, :]
xdiff_norm = arr[1, :]

def func(x, a, b):
    return a * x + b

initialguess = [5, 3]
mask = (x_dist > 2.5) & (x_dist < 6.6)
fit, covariance = optimize.curve_fit(
    func,
    x_dist[mask],
    xdiff_norm[mask],
    initialguess)

plt.scatter(x_dist, xdiff_norm, label="data")
x_fit = np.linspace(x_dist[mask].min(), x_dist[mask].max(), 100)
y_fit = func(x_fit, *fit)
plt.plot(x_fit, y_fit, c="red", label="fit")
plt.legend()
plt.show()
Sample output:
This approach does not modify x_dist and xdiff_norm, which may or may not matter for further data evaluation. If you want a line plot instead of a scatter plot, it can be useful to sort the arrays in advance (try a line plot with the above method to see why):
from matplotlib import pyplot as plt
import numpy as np
from scipy import optimize

# fake data generation
np.random.seed(1234)
arr = np.linspace(0, 15, 100).reshape(2, 50)
arr[1, :] = np.random.random(50)
arr[1, 20:45] += 2 * arr[0, 20:45] - 5
rng = np.random.default_rng()
rng.shuffle(arr, axis=1)
x_dist = arr[0, :]
xdiff_norm = arr[1, :]

def func(x, a, b):
    return a * x + b

# find the indexes that sort x_dist, then sort both arrays by them
ind = x_dist.argsort()
x_dist = x_dist[ind]
xdiff_norm = xdiff_norm[ind]
# identify the indexes where the linear range starts and stops
start = np.argmax(x_dist > 2.5)
stop = np.argmax(x_dist > 6.6)
initialguess = [5, 3]
fit, covariance = optimize.curve_fit(
    func,
    x_dist[start:stop],
    xdiff_norm[start:stop],
    initialguess)

plt.plot(x_dist, xdiff_norm, label="data")
x_fit = np.linspace(x_dist[start], x_dist[stop], 100)
y_fit = func(x_fit, *fit)
plt.plot(x_fit, y_fit, c="red", ls="--", label="fit")
plt.legend()
plt.show()
Sample output (unsurprisingly not much different):
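As a side note, for a plain straight line you could also skip curve_fit entirely and use np.polyfit on the masked data (a sketch using the mask from the first snippet):
# degree-1 polynomial fit over the selected interval only
slope, offset = np.polyfit(x_dist[mask], xdiff_norm[mask], 1)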

Creating a matrix of random data in Python

I am trying to create a matrix in python that is 30 × 10 and has randomly generated numbers inside of it. But my numbers in the matrix have to follow the condition:
Randomly generate 30 data points from the sine function, where each data point (x,y) has the form
x = [x^0, x^1, x^2, ..., x^10], with x ∈ [0, 2π]
y = sin(x) + ε, with ε ~ N(0, 0.3)
How might I be able to go about this?
Right now I only have a 1 × 10 matrix
def generate_sin_data():
    x = np.random.rand()
    y = np.sin(x)
    features = [x**0, x**1, x**2, x**3, x**4, x**5, x**6, x**7, x**8, x**9, x**10]
    return x, y, features
I'm not 100% certain I follow everything, but we can break it down. Here's how you can generate 30 random numbers between 0 and 2π:
import numpy as np
x = np.random.random(30) * 2*np.pi
Here, x is a 1D array of 30 numbers. Check this with x.shape.
Now if you add a dimension, it's easy to generate a matrix of powers up to 10 using NumPy's broadcasting feature. The question seems to ask for 11 numbers (powers 0 to 10), not 10, so I'll do that:
X = x.reshape(-1, 1) ** np.arange(0, 11)
That reshape effectively turns x into a column vector. Now check X.shape and it's (30, 11), which is what I think you were after. Notice we use a big X for a matrix — this convention will help you keep track of things. Each column of X is the original function raised to a power from 0 to 10. (Note that each column comes from the same set of random numbers — I'm not sure if that's what you want?)
If you want y as a function of x (the vector), then, to match ε ~ N(0, 0.3), use normal rather than uniform noise:
eps = np.random.normal(0, 0.3, 30)
y = np.sin(x) + eps
Alternatively, build the matrix with a list comprehension:
import numpy as np
# 30 random uniform values in [0, 2*pi)
_x = np.random.uniform(0, 2 * np.pi, 30)
# matrix of 30x10 (powers 0 to 9):
x = np.array([
    [v ** i for i in range(10)]
    for v in _x
])
# random 30x10 normal noise:
eps = np.random.normal(0, 0.3, [30, 10])
# final 30x10 result matrix:
y = np.sin(x) + eps
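Putting the pieces together, a minimal sketch of a generate_sin_data() that returns all 30 points at once (assuming the powers-0-to-10 reading of the question, so the feature matrix is 30x11) might look like this:
import numpy as np

def generate_sin_data(n_points=30, max_power=10):
    x = np.random.uniform(0, 2 * np.pi, n_points)             # x in [0, 2*pi)
    y = np.sin(x) + np.random.normal(0, 0.3, n_points)        # y = sin(x) + eps, eps ~ N(0, 0.3)
    features = x.reshape(-1, 1) ** np.arange(max_power + 1)   # shape (30, 11) via broadcasting
    return x, y, features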

Python: Scipy's curve_fit for NxM arrays?

Usually I use Scipy.optimize.curve_fit to fit custom functions to data.
Data in this case was always a 1 dimensional array.
Is there a similar function for a two-dimensional array?
So, for example, I have a 10x10 numpy array. Then I have a function that does some stuff and creates a 10x10 numpy array, and I want to fit the function, so that the resulting 10x10 array has the best fit to the input array.
Maybe an example is better :)
data = pyfits.getdata('data.fits') # FITS is an image format; this gives me an NxM numpy array
mod1 = pyfits.getdata('mod1.fits')
mod2 = pyfits.getdata('mod2.fits')
mod3 = pyfits.getdata('mod3.fits')
mod1_1D = numpy.ravel(mod1)
mod2_1D = numpy.ravel(mod2)
mod3_1D = numpy.ravel(mod3)
def dostuff(a, b):  # originally this is a function for 2D arrays
    newdata = (mod1_1D * 12) + (mod2_1D) ** a - mod3_1D / b
    return newdata
Now a and b should be fitted, so that newdata is as close as possible to data.
What I got so far:
data1D = numpy.ravel(data)
data_X = numpy.arange(data1D.size)
fit = curve_fit(dostuff,data_X,data1D)
But print fit only gives me
(array([ 1.]), inf)
I do have some NaNs in the arrays, maybe that's a problem?
The goal is to express the 2D function as a 1D function: g(x, y, ...) --> f(xy, ...)
Converting the coordinate pair (x, y) into a single number xy may seem tricky at first, but it's actually quite simple: just enumerate all data points, and you have a single number that uniquely identifies each coordinate pair. The fitted function simply has to reconstruct the original coordinates, do its calculations, and return the result.
Example that fits a 2D linear gradient in a 20x10 image:
import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
n, m = 10, 20
# noisy example data
x = np.arange(m).reshape(1, m)
y = np.arange(n).reshape(n, 1)
z = x + y * 2 + np.random.randn(n, m) * 3
def f(xy, a, b):
i = xy // m # reconstruct y coordinates
j = xy % m # reconstruct x coordinates
out = i * a + j * b
return out
xy = np.arange(z.size) # 0 is the top left pixel and 199 is the top right pixel
res = sp.optimize.curve_fit(f, xy, np.ravel(z))
z_est = f(xy, *res[0])
z_est2d = z_est.reshape(n, m)
plt.subplot(2, 1, 1)
plt.plot(np.ravel(z), label='original')
plt.plot(z_est, label='fitted')
plt.legend()
plt.subplot(2, 2, 3)
plt.imshow(z)
plt.xlabel('original')
plt.subplot(2, 2, 4)
plt.imshow(z_est2d)
plt.xlabel('fitted')
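About the NaNs you mentioned: yes, they are a problem. curve_fit cannot handle non-finite values; depending on the SciPy version it either raises an error or returns the initial guess with infinite covariance, much like your (array([ 1.]), inf) output. A sketch of masking them out before fitting, using the variables from the example above:
# keep only finite pixels; curve_fit chokes on NaN/inf values
z1d = np.ravel(z)
good = np.isfinite(z1d)
res = sp.optimize.curve_fit(f, xy[good], z1d[good])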
I would recommend using symfit for this; I wrote it to take care of all the magic for you automatically.
In symfit you would just write the equation pretty much as you would on paper, and then you can run the fit.
I would do something like this:
from symfit import parameters, variables, Fit
# Assuming all this data is in the form of NxM arrays
data = pyfits.getdata('data.fits')
mod1 = pyfits.getdata('mod1.fits')
mod2 = pyfits.getdata('mod2.fits')
mod3 = pyfits.getdata('mod3.fits')
a, b = parameters('a, b')
x, y, z, u = variables('x, y, z, u')
model = {u: (x * 12) + y**a - z / b}
fit = Fit(model, x=mod1, y=mod2, z=mod3, u=data)
fit_result = fit.execute()
print(fit_result)
Unfortunately I have not yet included examples of this kind in the docs, but if this doesn't work out of the box, a look at the docs should help you figure it out.

Generating 3D Gaussian distribution in Python

I want to generate a Gaussian distribution in Python with the x and y dimensions denoting position and the z dimension denoting the magnitude of a certain quantity.
The distribution has a maximum value of 2e6 and a standard deviation sigma=0.025.
In MATLAB I can do this with:
x1 = linspace(-1,1,30);
x2 = linspace(-1,1,30);
mu = [0,0];
Sigma = [.025,.025];
[X1,X2] = meshgrid(x1,x2);
F = mvnpdf([X1(:) X2(:)],mu,Sigma);
F = 314159.153*reshape(F,length(x2),length(x1));
surf(x1,x2,F);
In Python, what I have so far is:
x = np.linspace(-1,1,30)
y = np.linspace(-1,1,30)
mu = (np.median(x),np.median(y))
sigma = (.025,.025)
There is a NumPy function, numpy.random.multivariate_normal, that can supposedly do the same as MATLAB's mvnpdf, but I am struggling to understand the documentation, especially regarding the covariance matrix it needs.
As of scipy 0.14, you can use scipy.stats.multivariate_normal.pdf()
import numpy as np
from scipy.stats import multivariate_normal
x, y = np.mgrid[-1.0:1.0:30j, -1.0:1.0:30j]
# Need an (N, 2) array of (x, y) pairs.
xy = np.column_stack([x.flat, y.flat])
mu = np.array([0.0, 0.0])
sigma = np.array([.025, .025])
covariance = np.diag(sigma**2)
z = multivariate_normal.pdf(xy, mean=mu, cov=covariance)
# Reshape back to a (30, 30) grid.
z = z.reshape(x.shape)
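Note that this gives you the normalized pdf; to match the peak of 2e6 asked for in the question (the role of the 314159.153 factor in the MATLAB snippet), you can simply rescale afterwards. A minimal sketch:
# rescale the pdf so the surface peaks at 2e6
z *= 2e6 / z.max()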
I am working on a scikit called scikit-guess that contains some fast estimation routines for non-linear fits. It has a function skg.ngauss.model (also accessible as skg.ngauss_fit.model or skg.ngauss.ngauss_fit.model) which does exactly what you want. The nice thing is that it's not a PDF, so you can set the amplitude out of the box:
import numpy as np
import skg.ngauss
a = 2e6
mu = 0, 0
sigma = 0.025, 0.025
x = y = np.linspace(-1, 1, 31)
cov = np.diag(sigma)**2
X = np.meshgrid(x, y)
data = skg.ngauss.model(X, a, mu, cov, axis=0)
You need to tell it axis=0 because it automatically stacks your arrays for you. To avoid passing in that argument, you could write
X = np.stack(np.meshgrid(x, y), axis=-1)
You can plot the result:
from matplotlib import pyplot as plt
plt.imshow(data)
plt.show()
This is not a very exciting distribution because the spread is so small that you end up with a value of ~2e-5 just one pixel away. You may want to up your sampling space to get any sort of meaningful resolution.
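For example (just a sketch), sampling a window only a few sigma wide instead of [-1, 1] makes the bell actually visible:
# a grid spanning roughly +-4 sigma around the mean instead of [-1, 1]
x = y = np.linspace(-0.1, 0.1, 201)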
Note: At time of writing, the fitting function (ngauss_fit) is still buggy, but the model has been tested successfully, just not in the scikit.
Disclaimer: In case it wasn't obvious from the above, I am the author of scikit-guess.
