Since numpy.linalg.svd() is a predefined function, I couldn't find its inner code.
import numpy as np
from scipy import linalg
u, s, v = np.linalg.svd(b, full_matrices=True)
import inspect
from scipy import linalg
import numpy as np
print(inspect.getsource(np.linalg.svd))
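A small addition, not part of the original snippet: if you also want to know which file that source lives in, inspect.getsourcefile() works the same way.
print(inspect.getsourcefile(np.linalg.svd))  # path of the module defining svd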
I have written this code to solve an equation. I know the behavior of this function has very rapid oscillations; when I run it, it gives bogus values for some "m[x]" and some "t" values, with this error:
C:\Users\dani\anaconda3\lib\site-packages\scipy\integrate\odepack.py:247: ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
warnings.warn(warning_msg, ODEintWarning)
I don't know what the problem is.
How can I get correct results, or at least results that are as accurate as possible? Or should I rewrite the code in another form?
Thank you.
import scipy as sio
import numpy as np
import mpmath as mp
import scipy.integrate as spi
import matplotlib.pyplot as plt
import time
from scipy.integrate import quad
initial_value=np.logspace(24,27,100)
t=np.logspace(-20,6,100)
m=np.logspace(0,6,100)
start_time=time.perf_counter()
phi_m={}
phi_m_prime={}
phi=[]
phi_prime=[]
j=0
i=np.pi*2.435*initial_value[0]
while i<(np.pi*(2.435*10**(27))):
    i=np.pi*2.435*initial_value[j]
    phi=[]
    phi_prime=[]
    for x in range(len(m)):
        def dzdt(z,T):
            return [z[1], -3*1.4441*(10**(-6))*m[x]*np.sqrt(0.69)*(mp.coth(1.5*np.sqrt(0.69)*(10**(-6))*1.4441*m[x]*T))*z[1] - z[0]]
        z0 = [i,0]
        ts = t/m[x]
        zs = spi.odeint(dzdt, z0, ts)
        phi.append(zs[99,0])
        phi_prime.append(zs[99,1])
    phi_m[j]=phi
    phi_m_prime[j]=phi_prime
    j+=1
end_time=time.perf_counter()
print(end_time-start_time,"seconds")
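One diagnostic worth trying first, as the warning itself suggests (this is only a sketch, not code from the question): re-run an odeint call with full_output=1 to see LSODA's status message, and give it a larger internal step budget via mxstep. The right-hand side below is a simplified stand-in for dzdt above; whether more steps are enough depends on how many oscillation periods the time span covers.
import numpy as np
import scipy.integrate as spi

def rhs(z, T):
    # simplified, hypothetical stand-in for dzdt above (damped oscillator)
    return [z[1], -3*1.4441e-6*z[1] - z[0]]

ts = np.logspace(-20, 6, 100)
zs, info = spi.odeint(rhs, [1.0, 0.0], ts, full_output=1, mxstep=50000)
print(info["message"])   # reports whether the integration succeeded or ran out of steps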
I am trying to learn a bit of signal processing, specifically using Python. Here's some sample code I wrote.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import deconvolve
a = np.linspace(-1,1,50)
b = np.linspace(-1,1,50)**2
c = np.convolve(a,b,mode='same')
quotient,remainder = deconvolve(c,b);
plt.plot(a/max(a),"g")
plt.plot(b/max(b),"r")
plt.plot(c/max(c),"b")
plt.plot(remainder/max(remainder),"k")
#plt.plot(quotient/max(quotient),"k")
plt.legend(['a_original','b_original','convolution_a_b','deconvolution_a_b'])
In my understanding, deconvolving the convolved array should return exactly the same array 'a', since I am using 'b' as the filter. This is clearly not the case, as seen from the plots.
I am not really sure if my mathematical understanding of deconvolution is wrong or if there is something wrong with the code. Any help is greatly appreciated!
You are using mode='same', and this does not seem to be compatible with scipy's deconvolve. Try mode='full'; it should work much better.
Here is the corrected code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import deconvolve
a = np.linspace(-1,1,50)
b = np.linspace(-1,1,50)**2
c = np.convolve(a,b,mode='full')
quotient,remainder = deconvolve(c,b)
plt.plot(a,"g")
plt.plot(b,"r")
plt.plot(c,"b")
plt.plot(quotient,"k")
plt.xlim(0,50)
plt.ylim(-6,2)
plt.legend(['a_original','b_original','convolution_a_b','deconvolution_c_b'])
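A quick sanity check, using the arrays from the corrected snippet above: scipy.signal.deconvolve is documented to return quotient and remainder such that signal = convolve(divisor, quotient) + remainder, so the reconstruction should match c up to floating-point error.
import numpy as np
# identity from the deconvolve docstring: signal == convolve(divisor, quotient) + remainder
reconstructed = np.convolve(b, quotient) + remainder
print(np.allclose(reconstructed, c))   # True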
I am using a Python script for people detection.
I have the following lines in my script:
import time
import cv2 as cv
import glob
import argparse
import sys
import numpy as np
import os.path
from imutils.video import FPS
from collections import deque
from sklearn.utils.linear_assignment_ import linear_assignment
When I run my script, I get the following warning:
/home/user/.local/lib/python3.6/site-packages/sklearn/utils/linear_assignment_.py:127:
DeprecationWarning: The linear_assignment function is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
DeprecationWarning)
Please advise me on how to solve it.
You need to replace the sklearn.utils.linear_assignment_.linear_assignment function with the scipy.optimize.linear_sum_assignment function.
The difference is in the return format: linear_assignment() returns a numpy array, while linear_sum_assignment() returns a tuple of numpy arrays. You can obtain the same output by converting the output of linear_sum_assignment() to an array and transposing it.
Your script should look like this:
import time
import cv2 as cv
import glob
import argparse
import sys
import numpy as np
import os.path
from imutils.video import FPS
from collections import deque
from scipy.optimize import linear_sum_assignment
#compute your cost matrix
indices = linear_sum_assignment(cost_matrix)
indices = np.asarray(indices)
indices = np.transpose(indices)
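After this conversion, each row of indices is a (row, column) pair in the same layout the deprecated linear_assignment() produced, so the rest of the matching code should be able to consume it unchanged. A hypothetical usage sketch (the original matching loop is not shown in the question):
for track_idx, det_idx in indices:
    # track_idx indexes the rows of cost_matrix, det_idx its columns
    print(track_idx, det_idx)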
Replace linear_assignment with linear_sum_assignment:
# from sklearn.utils.linear_assignment_ import linear_assignment
from scipy.optimize import linear_sum_assignment
cost = np.array([[4, 1, 3], [2, 0, 5], [3, 2, 2]])
# result = linear_assignment(cost)
result = linear_sum_assignment(cost)
result = np.array(list(zip(*result)))
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linear_sum_assignment.html
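As a small self-contained check of the snippet above: the optimal pairing assigns row 0 to column 1, row 1 to column 0, and row 2 to column 2, for a total cost of 1 + 2 + 2 = 5.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3], [2, 0, 5], [3, 2, 2]])
rows, cols = linear_sum_assignment(cost)
print(np.array(list(zip(rows, cols))))   # pairs (0, 1), (1, 0), (2, 2)
print(cost[rows, cols].sum())            # minimal total cost: 5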
I am scratching my head over this very simple problem. Given this toy data:
randgen = np.random.RandomState(9)
npoints = 1000
noise = randgen.randn(npoints)
x = np.linspace(0, 1, npoints)
y = 5 + 10*x + noise
Solving this using numpy's least squares:
# design matrix::
X = np.ones((npoints, 2))
X[:,0] = np.copy(x)
p, res, rnk, s = np.linalg.lstsq(X, y)
p
gives a reasonable answer: array([ 9.94406755, 5.05954009]) for p. However, solving using scipy's least squares gives a wildly different answer (which changes on each invocation of the function):
p, res, rnk, s = scipy.linalg.lstsq(X, y)
p
An example solution is array([ 1.16328381e+08, -2.26560583e+06]). I don't understand what I am missing. I encountered this problem when using Scikit-learn's LinearRegression, which internally uses scipy's lstsq; that was giving me weird answers.
Edit:
Numpy version: 1.11.2
Scipy version: 0.18.1
Python: 3.5
Edit 2:
I have realized that loading a particular library before loading scipy is causing this problem. The following order of loading libraries causes the problem:
import numpy as np
from numpy.ma import MaskedArray
from matplotlib import pyplot as plt
from netCDF4 import Dataset
import matplotlib as mpl
from mpl_toolkits.basemap import Basemap
from pyeemd import ceemdan
from scipy.sparse.linalg import svds
from sklearn.utils.extmath import svd_flip
from matplotlib.colors import BoundaryNorm
from matplotlib.ticker import MaxNLocator
from scipy.signal import convolve, boxcar
If I remove the from pyeemd import ceemdan line, the problem is solved! This raises the following question: why could this be happening?
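A minimal way to narrow this down, based only on the observation above (it assumes pyeemd is installed and says nothing about why the clash happens): run the same toy fit twice, once with the pyeemd import active and once with it commented out, and compare the two solvers.
from pyeemd import ceemdan      # comment this line out for the second run
import numpy as np
import scipy.linalg

randgen = np.random.RandomState(9)
npoints = 1000
x = np.linspace(0, 1, npoints)
y = 5 + 10*x + randgen.randn(npoints)

X = np.ones((npoints, 2))
X[:, 0] = x

print("numpy:", np.linalg.lstsq(X, y)[0])      # reasonable in both runs, per the question
print("scipy:", scipy.linalg.lstsq(X, y)[0])   # reportedly breaks when pyeemd is imported first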
I'm trying to plot a simple signal in Python, and when I run this it doesn't show any error, only 'Restart' and a blank space.
from pymatlab import *
import numpy as np
from numpy import sqrt
import matplotlib.pyplot as plt
import scipy as sp
import math
# n, coef, freq, phase
def sinyal(N,c,f,p):
    y=np.zeros(N)
    t=np.linspace(0,2*pi,N)
    Nf=len(c)
    for i in range(Nf):
        y+=c[i]*np.sin(f[i]*t)
    return y;

    # Signal Generator
    c=[2,5,10]
    f=[50, 150, 300]
    p=[0,0]
    N=2000
    x=np.linspace(0,2.0*math.pi,N)
    y=sinyal(N,c,f,p)
    plt.plot(x[:100],y[:100])
    plt.show()
The code you posted has a logical indentation error. The call to sinyal is indented one level, placing it inside the definition of sinyal itself. So although sinyal gets defined, it never gets called.
Using 4 spaces for indentation may help you avoid this error in the future.
Your code basically works (apart from some formatting errors and other oddities). I don't have pymatlab but it isn't necessary for this.
import numpy as np
from numpy import sqrt
import matplotlib.pyplot as plt
import scipy as sp
import math
def sinyal(N,c,f,p):
    y=np.zeros(N)
    t=np.linspace(0,2*np.pi,N)
    Nf=len(c)
    for i in range(Nf):
        y+=c[i]*np.sin(f[i]*t)
    return y
# Signal Generator
c=[2,5,10]
f=[50, 150, 300]
p=[0,0]
N=2000
x=np.linspace(0,2.0*math.pi,N)
y=sinyal(N,c,f,p)
plt.plot(x[:100],y[:100])
plt.show()