I want to solve the following ODE
K*T + C*T' = Q
for the given example data; my code is below.
import numpy as np
from scipy.integrate import odeint

# Solve the following ODE
# K*T + C*T' = Q
# i.e. T' = C^-1 * (Q - K*T)
T_start = np.array([151.26, 132.18, 131.64, 146.55, 147.87, 137.87])
K = np.array([[-0.01761969,  0.02704873,  0.00572222,  0.        ,  0.        ,  0.        ],
              [ 0.02704873, -0.03546941,  0.        ,  0.        ,  0.00513177,  0.        ],
              [ 0.00572222,  0.        ,  0.03001858, -0.04752982,  0.        ,  0.02030505],
              [ 0.        ,  0.        , -0.04752982,  0.0444405 ,  0.00308932,  0.        ],
              [ 0.        ,  0.00513177,  0.        ,  0.00308932,  0.02629577, -0.01793915],
              [ 0.        ,  0.        ,  0.02030505,  0.        , -0.01793915,  0.00084506]])
Q = np.array([1.66342077, 0.16187956, 0.65115035, 0.71274755, 2.54614269, 0.13680399])
C_invers = np.array([[3.44827586, 0.        , 0.        , 0.        , 0.        , 0.        ],
                     [0.        , 1.5625    , 0.        , 0.        , 0.        , 0.        ],
                     [0.        , 0.        , 2.63157895, 0.        , 0.        , 0.        ],
                     [0.        , 0.        , 0.        , 2.17391304, 0.        , 0.        ],
                     [0.        , 0.        , 0.        , 0.        , 1.63934426, 0.        ],
                     [0.        , 0.        , 0.        , 0.        , 0.        , 2.38095238]])
time = np.linspace(0, 20, 10000)
#T_real = np.array([[151.26, 132.18, 131.64, 146.55, 147.87, 137.87]])

def deriv(T, t):
    return np.dot(C_invers, Q - np.dot(K, T))

T_sol = odeint(deriv, T_start, time)
I know that the result is
np.array([151.26, 132.18, 131.64, 146.55, 147.87, 137.87])
and the solution is "stable" if and only if I use exactly this as the T_start condition. But if I change my start condition, for example to
T_start = np.array([0, 0, 0, 0, 0, 0])
it won't converge; the solution blows up and even turns negative.
Where is my mistake? Negative values make no sense for my system :/ Can you help me? Thanks ;)
The array
array([ 151.26, 132.18, 131.64, 146.55, 147.87, 137.87])
is the equilibrium of your system (approximately). You can find this by setting the right-hand side of your system of equations to 0, which leads to Teq = inv(K)*Q:
In [9]: Teq = np.linalg.solve(K, Q)
In [10]: Teq
Out[10]:
array([ 151.25960795, 132.17972469, 131.6402527 , 146.55025359,
147.87025015, 137.87029892])
That's why your solution appears to be stable when you use these values for the starting point. The solution is very close to the equilibrium, so it doesn't change much.
Long term, however, the solution will eventually diverge away from Teq, because that equilibrium point is unstable. Your system, T' = inv(C)*(Q - K*T), is linear in T, so you can determine the stability by computing the eigenvalues of the coefficient matrix of T. That is, write T = inv(C)*Q - inv(C)*K*T. The coefficient matrix of T is -inv(C)*K. Here's how you can find the eigenvalues of that matrix:
In [11]: A = -C_invers.dot(K)
In [12]: np.linalg.eigvals(A)
Out[12]:
array([-0.2089754 , 0.12257481, -0.06349952, -0.01489581, 0.00146708,
0.05878143])
The coefficient matrix A has three positive eigenvalues. Those correspond to modes that will grow exponentially in time. That is, the equilibrium is unstable, so the growth that you see is to be expected.
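A quick numerical check of this (a sketch, reusing K, Q, deriv and time from the question): start slightly off the equilibrium and integrate; the distance to Teq grows over time because of the positive eigenvalues.

from scipy.integrate import odeint

Teq = np.linalg.solve(K, Q)
T_pert = odeint(deriv, Teq + 1e-6, time)          # tiny perturbation of the equilibrium
print(np.abs(T_pert - Teq).max(axis=1)[::2000])   # the deviation grows with time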
Related
I'm currently trying to develop a function that performs matrix multiplication while expanding a differential equation with odeint in Python and am seeing strange results.
I converted the function:
def f(x, t):
return [
-0.1 * x[0] + 2 * x[1],
-2 * x[0] - 0.1 * x[1]
]
to the version below, so that I can incorporate different matrices. I have the following matrix of values and a function that takes specific values from that matrix:
import numpy as np
from scipy.integrate import odeint

x0_train = [2, 0]
dt = 0.01
t = np.arange(0, 1000, dt)
matrix_a = np.array([-0.09999975, 1.999999, -1.999999, -0.09999974])
# Function to run odeint with
def f(x, t, a):
return [
a[0] * x[0] + a[1] * x[1],
a[2] * x[0] - a[3] * x[1]
]
odeint(f, x0_train, t, args=(matrix_a,))
>>> array([[ 2. , 0. ],
[ 1.99760115, -0.03999731],
[ 1.99440529, -0.07997867],
...,
[ 1.69090227, 1.15608741],
[ 1.71199436, 1.12319701],
[ 1.73240339, 1.08985846]])
This seems right, but when I create my own function to perform multiplication/regression, I see the results at the bottom of the array are completely different. I have two sparse arrays that provide the same conditions as matrix_a but with zeros around them.
from sklearn.preprocessing import PolynomialFeatures
new_matrix_a = np.array([[ 0. , -0.09999975, 1.999999 , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. ],
[ 0. , -1.999999 , -0.09999974, 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. ]])
# New function
def f_new(x, t, parameters):
polynomials = PolynomialFeatures(degree=5)
x = np.array(x).reshape(-1,2)
#x0_train_array_reshape = x0_train_array.reshape(1,2)
polynomial_transform = polynomials.fit(x)
polynomial_features = polynomial_transform.fit_transform(x).T
x_ode = np.matmul(parameters[0],polynomial_features)
y_ode = np.matmul(parameters[1],polynomial_features)
return np.concatenate((x_ode, y_ode), axis=None).tolist()
odeint(f_new, x0_train, t, args=(new_matrix_a,))
>>> array([[ 2.00000000e+00, 0.00000000e+00],
[ 1.99760142e+00, -3.99573216e-02],
[ 1.99440742e+00, -7.98188169e-02],
...,
[-3.50784051e-21, -9.99729456e-22],
[-3.50782881e-21, -9.99726119e-22],
[-3.50781711e-21, -9.99722781e-22]])
As you can see, I'm getting completely different values at the end of the array. I've been running through my code and can't find a reason why they would differ. Does anybody see a clear reason why, or whether I'm doing something wrong in my f_new? Ideally, I'd like to develop a function that can take any values in that matrix_a, which is why I'm trying to create this new function.
Thanks in advance.
You should perhaps use numpy even more in the first version, to avoid sign errors in routine algorithms.
def f(x, t, a):
    return a.reshape([2, 2]) @ x   # or use np.matmul, or a.reshape([2,2]).dot(x)
or, for efficiency, pass the already reshaped a.
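For example (a sketch, reusing x0_train, t and matrix_a from the question), the reshape can be done once, outside the integrator:

import numpy as np
from scipy.integrate import odeint

A = matrix_a.reshape(2, 2)

def f(x, t, A):
    return A @ x          # matrix-vector product, no hand-written signs

sol = odeint(f, x0_train, t, args=(A,))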
I'm trying to create a population of adjacency matrices with different random weights, for which I am using the code below. My problem is that after running it, all the weights are those of the last iteration of the list comprehension. When printing the adjacency matrices during generation everything looks fine, but the output of the getPopulation function is five times the same parameter set.
It feels like this should be an easy fix, but something (possibly very basic) is missing. Maybe a problem where a deep copy is needed?
I already tried using normal for-loops, print statements, etc.
import networkx as nx
import numpy as np
G = nx.DiGraph()
G.add_nodes_from(["Sadness", "Avoidance", "Guilt"])
G.add_edges_from([("Sadness", "Avoidance")], weight=1)
G.add_edges_from([("Avoidance", "Sadness")], weight=1)
G.add_edges_from([("Avoidance", "Guilt"), ("Guilt", "Sadness")], weight=1)
parameters = nx.to_numpy_matrix(G)
def getRandParamValue(adj):
for p in np.transpose(adj.nonzero()):
adj[p[0], p[1]] = adj[p[0], p[1]] * np.random.uniform()
print(adj)
return adj
def getPopulation(size, initParam):
return [getRandParamValue(initParam) for i in range(size)]
getPopulation(5, parameters)
Upon printing the output in the getRandParamValue function it works fine:
[[0. 0.40218464 0. ]
[0.07330473 0. 0.7196376 ]
[0.53148413 0. 0. ]]
[[0. 0.34256617 0. ]
[0.01773899 0. 0.12460768]
[0.1401687 0. 0. ]]
[[0. 0.11086942 0. ]
[0.01449088 0. 0.04592752]
[0.07903259 0. 0. ]]
[[0. 0.01970867 0. ]
[0.00589168 0. 0.00860802]
[0.06942081 0. 0. ]]
[[0. 0.01045412 0. ]
[0.00084878 0. 0.00713334]
[0.0024654 0. 0. ]]
However, the output of getPopulation is not the five different matrices printed above, as I would expect; instead it is five copies of the last one:
[matrix([[0. , 0.01045412, 0. ],
[0.00084878, 0. , 0.00713334],
[0.0024654 , 0. , 0. ]]),
matrix([[0. , 0.01045412, 0. ],
[0.00084878, 0. , 0.00713334],
[0.0024654 , 0. , 0. ]]),
matrix([[0. , 0.01045412, 0. ],
[0.00084878, 0. , 0.00713334],
[0.0024654 , 0. , 0. ]]),
matrix([[0. , 0.01045412, 0. ],
[0.00084878, 0. , 0.00713334],
[0.0024654 , 0. , 0. ]]),
matrix([[0. , 0.01045412, 0. ],
[0.00084878, 0. , 0.00713334],
[0.0024654 , 0. , 0. ]])]
The parameters matrix is just the following:
[[0. 1. 0.]
[1. 0. 1.]
[1. 0. 0.]]
So the problem is as follows:
def myfunction(L):
L[0] += 1
return L
my_outer_list = [1,2,3]
newlist = myfunction(my_outer_list)
print(newlist)
> [2, 2, 3]
print(my_outer_list)
> [2, 2, 3]
newlist[2]=-1
print(newlist)
> [2, 2, -1]
print(my_outer_list)
> [2, 2, -1]
I've passed the object my_outer_list to the function. Then that object gets modified, and the object is returned. So now newlist and my_outer_list aren't just equal, they are two different names for the very same thing. Things I do to that object change the object, and you see those changes no matter which name you use.
This is what's happened to you. If I had instead had myfunction return L.copy(), it would have returned a copy of L rather than exactly L.
So you should return adj.copy().
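One way to apply that advice (a sketch; it converts the matrix to a plain ndarray and copies it at the top of the function, so every individual is drawn from the same untouched template):

def getRandParamValue(adj):
    adj = np.asarray(adj).copy()               # work on a private copy
    for p in np.transpose(adj.nonzero()):
        adj[p[0], p[1]] *= np.random.uniform()
    return adj

def getPopulation(size, initParam):
    return [getRandParamValue(initParam) for i in range(size)]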
I have a 2D list P[30][30] of probabilities, initialized to 0, which I want to update. I wrote a function that should update the values I want, but they still remain 0.
def Prop(graph,i,candidate_nodes,Pr,t,n1):
pp=0
for j in candidate_nodes:
pp+=(t[i][j]*n1[i][j])
for k in candidate_nodes:
Pr[i][k]=(t[i][k]*n1[i][k])/pp
return Pr
The function should work, as far as I can see. If I strip out everything I don't know about, it does change a 2D array full of zeros:
def Prop(candidate_nodes, Pr, i):
pp=0
for j in candidate_nodes:
pp+=4*j
for k in candidate_nodes:
Pr[i][k]=i/pp
return Pr
Prop([1,2,3], np.zeros((3,7)), 2)
Out:
array([[0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[0. , 0.08333333, 0.08333333, 0.08333333, 0. , 0. , 0. ]])
I'm very new to GPU programming and pyCUDA and have a pretty fundamental gap in my knowledge. I have spent quite a bit of time searching SO, looking at example code and reading supporting documentation for CUDA/pyCUDA but haven't found much diversity in the explanations and can't get my head around a few things.
I am having trouble correctly defining block and grid dimensions. The code I am currently running is as follows, and aims to do element-wise multiplication of an array a by a float b:
from __future__ import division
import pycuda.gpuarray as gpuarray
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
import numpy as np
rows = 256
cols = 10
a = np.ones((rows, cols), dtype=np.float32)
a_gpu = cuda.mem_alloc(a.nbytes)
cuda.memcpy_htod(a_gpu, a)
b = np.float32(2)
mod = SourceModule("""
__global__ void MatMult(float *a, float b)
{
const int i = threadIdx.x + blockDim.x * blockIdx.x;
const int j = threadIdx.y + blockDim.y * blockIdx.y;
int Idx = i + j*gridDim.x;
a[Idx] *= b;
}
""")
func = mod.get_function("MatMult")
xBlock = np.int32(np.floor(1024/rows))
yBlock = np.int32(cols)
bdim = (xBlock, yBlock, 1)
dx, mx = divmod(rows, bdim[0])
dy, my = divmod(cols, bdim[1])
gdim = ( (dx + (mx>0)) * bdim[0], (dy + (my>0)) * bdim[1])
print "bdim=",bdim, ", gdim=", gdim
func(a_gpu, b, block=bdim, grid=gdim)
a_doubled = np.empty_like(a)
cuda.memcpy_dtoh(a_doubled, a_gpu)
print(a_doubled - 2 * a)
The code should print the block dimensions bdim and the grid dimensions gdim, as well as an array of zeroes.
This works for small array sizes, for example, if rows=256 and cols=10 (as in the example above) the output is as follows:
bdim= (4, 10, 1) , gdim= (256, 10)
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
However, if I increase rows=512, I get the following output:
bdim= (2, 10, 1) , gdim= (512, 10)
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 2. 2. 2. ..., 2. 2. 2.]
[ 2. 2. 2. ..., 2. 2. 2.]
[ 2. 2. 2. ..., 2. 2. 2.]]
Indicating that the multiplication is happening twice for some elements of the array.
However, if I force the block dimensions to bdim = (1,1,1), the problem no longer occurs and I get the following (correct) output for the larger array size:
bdim= (1, 1, 1) , gdim= (512, 10)
[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
I don't understand this. What is happening here that makes this method of defining the block and grid dimensions no longer appropriate as the array size increases? Also, if the block has dimensions (1,1,1), does this mean that the calculation is performed serially?
Thanks in advance for any pointers and help!
You operate on a 2D grid of 2D blocks. In your kernel you seem to assume that gridDim.x returns the number of threads in the x dimension of the grid.
__global__ void MatMult(float *a, float b)
{
const int i = threadIdx.x + blockDim.x * blockIdx.x;
const int j = threadIdx.y + blockDim.y * blockIdx.y;
int Idx = i + j*gridDim.x;
a[Idx] *= b;
}
gridDim.x returns the number of blocks in the x direction of the grid, not the number of threads. To obtain the number of threads in a given direction, multiply the number of threads per block by the number of blocks in the grid in that direction:
int Idx = i + j * blockDim.x * gridDim.x;
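Here is one way to make the whole example robust (a sketch, not the only possible fix): pass the array shape into the kernel, index in row-major order so the layout matches the numpy array, guard against threads that fall outside the array, and give the grid size in blocks rather than threads.

import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

rows, cols = 512, 10
a = np.ones((rows, cols), dtype=np.float32)
a_gpu = cuda.mem_alloc(a.nbytes)
cuda.memcpy_htod(a_gpu, a)
b = np.float32(2)

mod = SourceModule("""
__global__ void MatMult(float *a, float b, int rows, int cols)
{
    const int i = threadIdx.x + blockDim.x * blockIdx.x;   // row index
    const int j = threadIdx.y + blockDim.y * blockIdx.y;   // column index
    if (i < rows && j < cols)
        a[i * cols + j] *= b;                              // row-major, like numpy
}
""")
func = mod.get_function("MatMult")

bdim = (16, 16, 1)                                         # threads per block
gdim = ((rows + bdim[0] - 1) // bdim[0],                   # blocks per grid
        (cols + bdim[1] - 1) // bdim[1])
func(a_gpu, b, np.int32(rows), np.int32(cols), block=bdim, grid=gdim)

a_doubled = np.empty_like(a)
cuda.memcpy_dtoh(a_doubled, a_gpu)
print(a_doubled - 2 * a)                                   # should be all zeros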
I am trying to implement non-negative matrix factorization to find the missing values of a matrix for a recommendation engine project. I am using the nimfa library for the matrix factorization, but I can't figure out how to predict the missing values.
The missing values in this matrix are represented by 0.
a = [[1.        , 0.45643546, 0.        , 0.1       , 0.10327956, 0.0225877 ],
     [0.15214515, 1.        , 0.04811252, 0.07607258, 0.23570226, 0.38271325],
     [0.        , 0.14433757, 1.        , 0.07905694, 0.        , 0.42857143],
     [0.1       , 0.22821773, 0.07905694, 1.        , 0.        , 0.27105237],
     [0.06885304, 0.47140452, 0.        , 0.        , 1.        , 0.13608276],
     [0.00903508, 0.4592559 , 0.17142857, 0.10842095, 0.08164966, 1.        ]]
import numpy as np
import nimfa

model = nimfa.Lsnmf(a, max_iter=100000, rank=4)
# fit the model
fit = model()
# get U and V matrices from the fit
U = fit.basis()
V = fit.coef()
print(np.dot(U, V))
But the result is nearly the same as a, and I can't predict the zero values.
Please tell me which method to use, or point me to other possible implementations and resources.
I want to use this function to minimize the error in predicting the values:
error = ||a - UV||_F + c*||U||_F + c*||V||_F
where ||.||_F denotes the Frobenius norm.
I have not used nimfa before, so I cannot say exactly how to do that there, but with sklearn you can use a preprocessor to impute the missing values, like this:
In [28]: import numpy as np
In [29]: from sklearn.preprocessing import Imputer
# prepare a numpy array
In [30]: a = np.array(a)
In [31]: a
Out[31]:
array([[ 1. , 0.45643546, 0. , 0.1 , 0.10327956,
0.0225877 ],
[ 0.15214515, 1. , 0.04811252, 0.07607258, 0.23570226,
0.38271325],
[ 0. , 0.14433757, 1. , 0.07905694, 0. ,
0.42857143],
[ 0.1 , 0.22821773, 0.07905694, 1. , 0. ,
0.27105237],
[ 0.06885304, 0.47140452, 0. , 0. , 1. ,
0.13608276],
[ 0.00903508, 0.4592559 , 0.17142857, 0.10842095, 0.08164966,
1. ]])
In [32]: pre = Imputer(missing_values=0, strategy='mean')
# transform missing_values as "0" using mean strategy
In [33]: pre.fit_transform(a)
Out[33]:
array([[ 1. , 0.45643546, 0.32464951, 0.1 , 0.10327956,
0.0225877 ],
[ 0.15214515, 1. , 0.04811252, 0.07607258, 0.23570226,
0.38271325],
[ 0.26600665, 0.14433757, 1. , 0.07905694, 0.35515787,
0.42857143],
[ 0.1 , 0.22821773, 0.07905694, 1. , 0.35515787,
0.27105237],
[ 0.06885304, 0.47140452, 0.32464951, 0.27271009, 1. ,
0.13608276],
[ 0.00903508, 0.4592559 , 0.17142857, 0.10842095, 0.08164966,
1. ]])
You can read more here.
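Note that in newer versions of scikit-learn the Imputer class has been removed in favour of SimpleImputer; an equivalent call (a sketch) looks like this:

import numpy as np
from sklearn.impute import SimpleImputer

a = np.array(a)   # the matrix from the question
pre = SimpleImputer(missing_values=0, strategy='mean')
a_filled = pre.fit_transform(a)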