Need help to convert the following R code to Python

I need help converting the following R code to Python. In particular, with the matrix() function from R (variable W), I find it difficult: my only idea is to use np.random.uniform(), but I don't know whether that works. Can anyone help me? Thanks!!
set.seed(1)
n = 100;
p = 400;
Z= runif(n)-1/2;
W = matrix(runif(n*p)-1/2, n, p);
beta = 1/seq(1:p)^2; # approximately sparse beta
#beta = rnorm(p)*.2 # dense beta
gX = exp(4*Z)+ W%*%beta; # leading term nonlinear
X = cbind(Z, Z^2, Z^3, W ); # polynomials in Zs will be approximating exp(4*Z)
Y = gX + rnorm(n); #generate Y
plot(gX,Y, xlab="g(X)", ylab="Y") #plot Y vs g(X)
print( c("theoretical R2:", var(gX)/var(Y)))
var(gX)/var(Y); #theoretical R-square in the simulation example

Something like this?
import numpy as np
from matplotlib import pyplot as plt

n, p = 100, 400
Z, W = np.random.rand(n) - 1/2, np.random.rand(n, p) - 1/2   # runif() - 1/2
beta = 1 / np.arange(1, p + 1)**2                            # approximately sparse beta
gX = np.exp(4*Z) + W @ beta                                  # leading term nonlinear
Y = gX + np.random.randn(n)                                  # rnorm(n): standard normal noise, not uniform
plt.scatter(gX, Y)
plt.xlabel("g(X)")
plt.ylabel("Y")
plt.show()
gX.var() / Y.var()                                           # theoretical R-squared
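The original R script also seeds the RNG and builds X = cbind(Z, Z^2, Z^3, W); if you want those pieces too, a minimal sketch is below (seeding NumPy makes the Python run reproducible, but it will not reproduce R's exact draws from set.seed(1)):

import numpy as np

rng = np.random.default_rng(1)             # reproducible in Python; not the same stream as R's set.seed(1)
n, p = 100, 400
Z = rng.uniform(size=n) - 1/2
W = rng.uniform(size=(n, p)) - 1/2
X = np.column_stack([Z, Z**2, Z**3, W])    # equivalent of cbind(Z, Z^2, Z^3, W)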

Related

How do I convert the x and y values in polar form from these coupled ODEs to Cartesian form and graph them?

I have written this code to model the motion of a spring pendulum
import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt

def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (0.415+x)*(dydt)**2 - 50/1.006*x + 9.81*cos(y)
    dy2dt2 = (-9.81*1.006*sin(y) - 2*(dxdt)*(dydt))/(0.415+x)
    return np.array([x, y, dx2dt2, dy2dt2])

init = array([0, pi/18, 0, 0])
time = np.linspace(0.0, 10.0, 1000)
sol = odeint(deriv, init, time)

def plot(h, t):
    n, u, x, y = h
    n = (0.4+x)*sin(y)
    u = (0.4+x)*cos(y)
    return np.array([n, u, x, y])

init2 = array([0.069459271, 0.393923101, 0, pi/18])
time2 = np.linspace(0.0, 10.0, 1000)
sol2 = odeint(plot, init2, time2)

plt.xlabel("x")
plt.ylabel("y")
plt.plot(sol2[:, 0], sol2[:, 1], label='hi')
plt.legend()
plt.show()
where x and y are two variables, and I'm trying to convert x and y to the Cartesian coordinates n (x-axis) and u (y-axis) and then graph n against u, with n on the x-axis and u on the y-axis. However, when I graph the code above it gives me:
Instead, I should be getting an image somewhat similar to this:
The first part of the code, from "def deriv(z, t):" to "sol = odeint(deriv, ...)", is where the values of x and y are generated, and using that I can then turn them into rectangular coordinates and graph them. How do I change my code to do this? I'm new to Python, so I might not understand some of the terminology. Thank you!
The first solution should give you the expected result, but there is a mistake in the implementation of the ODE.
The function you pass to odeint should return the right-hand side of a system of first-order differential equations, i.e. the derivatives of the state variables.
In your case, because deriv returns [x, y, dx2dt2, dy2dt2], what you are actually solving is
x' = x,  y' = y,  (dx/dt)' = d2x/dt2,  (dy/dt)' = d2y/dt2
while instead you should be solving
x' = dx/dt,  y' = dy/dt,  (dx/dt)' = d2x/dt2,  (dy/dt)' = d2y/dt2
In order to do so, change your code to this:
import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt

def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (0.415 + x) * (dydt)**2 - 50 / 1.006 * x + 9.81 * cos(y)
    dy2dt2 = (-9.81 * 1.006 * sin(y) - 2 * (dxdt) * (dydt)) / (0.415 + x)
    return np.array([dxdt, dydt, dx2dt2, dy2dt2])

init = array([0, pi / 18, 0, 0])
time = np.linspace(0.0, 10.0, 1000)
sol = odeint(deriv, init, time)

plt.plot(sol[:, 0], sol[:, 1], label='hi')
plt.show()
The second part of the code looks like you are trying to do a change of coordinates.
I'm not sure why you try to solve the ODE again instead of just doing this:
x = sol[:, 0]
y = sol[:, 1]

def plot(h):
    x, y = h
    n = (0.4 + x) * sin(y)
    u = (0.4 + x) * cos(y)
    return np.array([n, u])

n, u = plot((x, y))
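You can then plot the transformed trajectory directly, for example:

plt.plot(n, u)      # n on the x-axis, u on the y-axis
plt.xlabel("n")
plt.ylabel("u")
plt.show()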
As of now, what you are doing there is solving this system:
n' = (0.4 + x)*sin(y),  u' = (0.4 + x)*cos(y),  x' = x,  y' = y
which (up to the initial values) makes x and y grow like e^t, and hence n' = (0.4 + e^t)*sin(e^t) and u' = (0.4 + e^t)*cos(e^t).
Without going too much into the details, with some intuition you can see that this leads to an attractor: the derivatives of n and u switch sign faster and with greater magnitude at an exponential rate, so n and u collapse onto the attractor shown by your plot.
If you are actually trying to solve another differential equation, I would need to see it in order to help you further.
This is what happens if you do the transformation and set the time to 1000:

How to integrate coupled differential equations?

I've got a system of equations that I've been trying to get Python to solve and plot, but the plot is not coming out right.
This is my code:
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt

# function that returns dc/dt and dtau/dt
def func(z, t):
    for r in range(-10, 10):
        beta = 2
        gamma = 0.8
        c = z[0]
        tau = z[1]
        dcdt = r*c + c**2 - c**3 - beta*c*tau**2
        dtaudt = -gamma*tau + 0.5*beta*c*tau
    return [dcdt, dtaudt]

# initial conditions
z0 = [2, 0]

# time points
t = np.linspace(0, 24, 100)

# solve ODE
z = odeint(func, z0, t)

# separating answers out
c = z[:, 0]
tau = z[:, 1]
print(z)

# plot results
plt.plot(t, c, 'r-')
plt.plot(t, tau, 'b--')
plt.legend(['c(t)', 'tau(t)'])
plt.show()
Let me explain. I am studying doubly diffusive convection. I didn't want any assumptions to be made on the value of r, but beta and gamma are positive, so I thought to assign values to them but not to r.
This is the plot I get, and from my understanding of the problem the graph is not right. The tau plot should definitely not be stuck at 0, and the c plot should be doing more. I am relatively new to Python and am taking courses, but I really want to understand what I've done wrong, so help in simple language would be appreciated.
I see 2 problems in your function that you should check.
for r in range(-10,10):
Here you are doing a for loop that just re-evaluates dcdt and dtaudt. As a result, the output is the same as evaluating once with r=9 (the last value in the loop).
dtaudt = -gamma*tau+0.5*beta*c*tau
Here you have dtaudt = tau*(beta*c/2. -gamma). Your choice tau[0]=0 implies that tau will remain 0.
Try this:
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt

r = 1
beta = 2
gamma = 0.8

# function that returns dc/dt and dtau/dt
def func(z, t):
    c = z[0]
    tau = z[1]
    dcdt = r*c + c**2 - c**3 - beta*c*tau**2
    dtaudt = -gamma*tau + 0.5*beta*c*tau
    print(dtaudt)
    return [dcdt, dtaudt]

# initial conditions
z0 = [2, 0.2]   # tau(0) != 0.0

# time points
t = np.linspace(0, 24, 100)

# solve ODE
z = odeint(func, z0, t)

# separating answers out
c = z[:, 0]
tau = z[:, 1]

# plot results
plt.plot(t, c, 'r-')
plt.plot(t, tau, 'b--')
plt.legend(['c(t)', 'tau(t)'])
plt.show()
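If the goal is to explore several values of r rather than fixing one, a minimal sketch (the specific r values below are just for illustration) is to solve the system once per value and overlay the results:

from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt

beta = 2
gamma = 0.8

def func(z, t, r):
    c, tau = z
    dcdt = r*c + c**2 - c**3 - beta*c*tau**2
    dtaudt = -gamma*tau + 0.5*beta*c*tau
    return [dcdt, dtaudt]

z0 = [2, 0.2]                 # tau(0) != 0
t = np.linspace(0, 24, 100)

# solve once per value of r and overlay the c(t) curves
for r in (-5, -1, 1, 5):      # illustrative values only
    z = odeint(func, z0, t, args=(r,))
    plt.plot(t, z[:, 0], label='c(t), r=%d' % r)
plt.legend()
plt.show()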

Plotting mathematical function in python

I'm trying to write a function called plotting which takes input parameters Z, p, and q and plots the function
f(y) = det(Z − yI) on the interval [p, q]
(Note: I is the identity matrix, and det() is the determinant.)
For finding det(), numpy.linalg.det() can be used, and for the identity matrix, np.matlib.identity(n).
Is there a way to write such a function in Python and plot it?
import numpy as np

def f(y):
    I2 = np.matlib.identity(y)
    x = Z-yI2
    numpy.linalg.det(x)
    ....
Is what I am trying correct? Any alternative?
You could use the following implementation.
import numpy as np
import matplotlib.pyplot as plt

def f(y, Z):
    n, m = Z.shape
    assert n == m
    I = np.identity(n)
    x = Z - y*I
    return np.linalg.det(x)

Z = np.matrix('1 2; 3 4')
p = -15
q = 15

y = np.linspace(p, q)
w = np.zeros(y.shape)
for i in range(len(y)):
    w[i] = f(y[i], Z)

plt.plot(y, w)
plt.show()
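As a quick sanity check (not part of the original question), the zeros of f(y) = det(Z − yI) are exactly the eigenvalues of Z, so the plotted curve should cross zero at the values returned by np.linalg.eigvals:

print(np.linalg.eigvals(Z))   # for Z = [[1, 2], [3, 4]] this is approximately -0.37 and 5.37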

Issues Translating Custom Discrete Fourier Transform from MATLAB to Python

I'm developing Python software for someone and they specifically requested that I use their DFT function, written in MATLAB, in my program. My translation is just plain not working, tested with sin(2*pi*r).
The MATLAB function below:
function X=dft(t,x,f)
% Compute DFT (Discrete Fourier Transform) at frequencies given
% in f, given samples x taken at times t:
%   X(f) = sum { x(k) * e**(-2*pi*j*t(k)*f) }
%           k
shape = size(f);
t = t(:); % Format 't' into a column vector
x = x(:); % Format 'x' into a column vector
f = f(:); % Format 'f' into a column vector
W = exp(-2*pi*j * f*t');
X = W * x;
X = reshape(X,shape);
And my Python interpretation:
def dft(t, x, f):
    i = 1j  # might not have to set it to a variable but better safe than sorry!
    w1 = f * t
    w2 = -2 * math.pi * i
    W = exp(w1 * w2)
    newArr = W * x
    return newArr
Why am I having issues? The MATLAB code works fine but the Python translation outputs a weird increasing sine curve instead of a Fourier transform. I get the feeling Python is handling the calculations slightly differently but I don't know why or how to fix this.
Here's your MATLAB code -
t = 0:0.005:10-0.005;
x = sin(2*pi*t);
f = 30*(rand(size(t))+0.225);
shape = size(f);
t = t(:); % Format 't' into a column vector
x = x(:); % Format 'x' into a column vector
f = f(:); % Format 'f' into a column vector
W = exp(-2*pi*1j * f*t');
X = W * x;
X = reshape(X,shape);
figure,plot(f,X,'ro')
And here's what one version of the NumPy-ported code might look like -
import numpy as np
from numpy import math
import matplotlib.pyplot as plt
t = np.arange(0, 10, 0.005)
x = np.sin(2*np.pi*t)
f = 30*(np.random.rand(t.size)+0.225)
N = t.size
i = 1j
W = np.exp((-2 * math.pi * i)*np.dot(f.reshape(N,1),t.reshape(1,N)))
X = np.dot(W,x.reshape(N,1))
out = X.reshape(f.shape).T
plt.plot(f, out, 'ro')
MATLAB Plot -
Numpy Plot -
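If you need the port as a drop-in function with the same interface as the MATLAB dft, a minimal sketch based on the code above (not the original author's function) could be:

import numpy as np

def dft(t, x, f):
    # X(f) = sum_k x[k] * exp(-2*pi*1j*t[k]*f), evaluated at the frequencies in f
    t = np.asarray(t).reshape(-1, 1)   # column vector, like t(:) in MATLAB
    x = np.asarray(x).reshape(-1, 1)
    f = np.asarray(f)
    W = np.exp(-2j * np.pi * f.reshape(-1, 1) @ t.T)
    return (W @ x).reshape(f.shape)

Called as X = dft(t, x, f) with the arrays above, it should reproduce the plotted result.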
NumPy arrays do element-wise multiplication with *.
You need np.dot(w1, w2) for matrix multiplication with NumPy arrays (not the case for NumPy matrices).
Make sure you are clear on the distinction between Numpy arrays and matrices. There is a good help page "Numpy for Matlab Users":
http://wiki.scipy.org/NumPy_for_Matlab_Users
It doesn't appear to be working at present, so here is a temporary link.
Also, use t.T to transpose a numpy array called t.
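A tiny example of the difference (illustrative values only):

import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(a * b)         # element-wise: [[ 5 12] [21 32]]
print(np.dot(a, b))  # matrix product: [[19 22] [43 50]]
print(a.T)           # transpose:      [[1 3] [2 4]]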

How can I implement bivariate normal Gaussian noise?

I want to implement complex standard Gaussian noise in Python or C. This figure shows what I want to implement.
First I implemented it in Python, like this:
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import pylab as pl

size = 100000
BIN = 70

x = np.random.normal(0.0, 1.0, size)
y = np.random.normal(0.0, 1.0, size)

xhist = pl.hist(x, bins=BIN, range=(-3.5, 3.5), normed=True)
yhist = pl.hist(y, bins=BIN, range=(-3.5, 3.5), normed=True)

xmesh = np.arange(-3.5, 3.5, 0.1)
ymesh = np.arange(-3.5, 3.5, 0.1)

Z = np.zeros((BIN, BIN))
for i in range(BIN):
    for j in range(BIN):
        Z[i][j] = xhist[0][i] + yhist[0][j]

X, Y = np.meshgrid(xmesh, ymesh)
fig = plt.figure()
ax = Axes3D(fig)
ax.plot_wireframe(X, Y, Z)
plt.show()
However, it is not standard complex Gaussian noise.
The output figure becomes:
I think Gaussian noises are additive, so why does it become so different?
I already tried changing this part of the code
x = np.random.normal(0.0,1.0,size)
y = np.random.normal(0.0,1.0,size)
to
r = np.random.normal(0.0,1.0,size)
theta = np.random.uniform(0.0,2*np.pi,size)
x = r * np.cos(theta)
y = r * np.sin(theta)
however, the result was the same.
Please tell me the correct implementation or equation of bivariate standard Gaussian noise.
So sorry, it's my mistake.
Joint probability is defined by the product, not the summation. I was a perfect fool!
So
Z[i][j] = xhist[0][i] + yhist[0][j]
term must become
Z[i][j] = xhist[0][i] * yhist[0][j]
And I checked that

integral = 0.0
for i in range(BIN):
    for j in range(BIN):
        integral = integral + Z[i][j] * 0.01

comes out at approximately 1.0 (each 2D bin has area 0.1 * 0.1 = 0.01).
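Putting it together, a minimal corrected sketch (same bins, range, and plotting style as the script above) is:

# joint density as the product of the two marginal histograms
Z = np.outer(xhist[0], yhist[0])   # Z[i, j] = xhist[0][i] * yhist[0][j]
print(Z.sum() * 0.01)              # each 2D bin has area 0.1 * 0.1 = 0.01; this is approximately 1.0
X, Y = np.meshgrid(xmesh, ymesh)
fig = plt.figure()
ax = Axes3D(fig)
ax.plot_wireframe(X, Y, Z)
plt.show()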
So if we need complex standard Gaussian noise, we should add independent real standard Gaussian noise to the real part and to the imaginary part.
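For example, a minimal way to draw such complex samples (the 1/sqrt(2) factor is one common convention that makes the total variance 1; drop it if you want unit variance per component):

import numpy as np

size = 100000
noise = (np.random.normal(0.0, 1.0, size) + 1j * np.random.normal(0.0, 1.0, size)) / np.sqrt(2)
print(np.var(noise))   # approximately 1.0 with the 1/sqrt(2) scaling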
This is the graph for comparison.
