I'm applying a Proportional-Derivative (PD) controller to control my model in ROS. I'm limited to Python 2.7.17.
There are two error terms in this script: the position error (ep) and the heading error (eth).
I've initialized last_error = 0 and I update (ep_dot) and (eth_dot) as my way of finding the derivative of each error.
I wonder whether my formulas for the derivatives (ep_dot) and (eth_dot) are correct. Is this the right way of finding the derivative? Is there another, more suitable approach?
Kindly let me know the solution.
#!/usr/bin/env python
import rospy
import math
import csv
import time
from time import gmtime, strftime
from datetime import datetime
import numpy as np  # for converting radians to degrees
from geometry_msgs.msg import Twist  # to publish command velocities
from nav_msgs.msg import Odometry  # to obtain position and orientation
from tf.transformations import euler_from_quaternion, quaternion_from_euler

roll = pitch = yaw = 0.0
t_start = time.time()
last_error = 0

def get_rotation(msg):
    velo_msg = Twist()
    global roll, pitch, yaw
    orientation_q = msg.pose.pose.orientation
    orientation_list = [orientation_q.x, orientation_q.y, orientation_q.z, orientation_q.w]
    (roll, pitch, yaw) = euler_from_quaternion(orientation_list)
    kpp = 0.74
    kpth = 0.5
    kd = 0.1
    Tmax = 60
    t_milli = (time.time() - t_start) * 1000
    t = t_milli / 1000  # to get the values in seconds
    print("t=", t)
    if t > Tmax:
        rospy.signal_shutdown("Simulation ends here!")
    x_now = msg.pose.pose.position.x
    y_now = msg.pose.pose.position.y
    th = yaw
    xT = math.cos(2 * math.pi * t / Tmax) * 0.8
    yT = math.sin(2 * math.pi * t / Tmax) * 0.8
    # Transformation matrix: finding the INVERSE
    I = np.array([[math.cos(th), -math.sin(th), x_now],
                  [math.sin(th), math.cos(th), y_now],
                  [0, 0, 1]])
    B = np.array([[xT], [yT], [1]])
    C = np.dot(np.linalg.inv(I), B)  # np.dot: matrix multiplication
    xTB = C[0][0]  # indexing to extract the values of the array/matrix
    yTB = C[1][0]
    ep = math.sqrt(xTB * xTB + yTB * yTB)  # position error
    ep_dot = (ep - last_error) / t
    eth = math.atan2(yTB, xTB)  # heading error
    eth_dot = (eth - last_error) / t
    print('ep =', ep, 'eth(deg)=', eth * 180 / math.pi, 'radius=', math.sqrt(x_now * x_now + y_now * y_now), 't=', t)
    PID_lin = ep * kpp + ep_dot * kd
    PID_ang = eth * kpth - eth_dot * kd
    # publish cmd_vel for both linear and angular motion
    velo_msg.linear.x = PID_lin
    velo_msg.angular.z = PID_ang
    pub.publish(velo_msg)

rospy.init_node('shadan')
sub = rospy.Subscriber("/odom", Odometry, get_rotation)
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
r = rospy.Rate(10)
while not rospy.is_shutdown():
    r.sleep()
I think there is an issue with your calculation of the derivative terms.
You set t_start = time.time() once at the very beginning of your code, and then every time you enter the callback you update t_milli = (time.time() - t_start)*1000 and t = t_milli/1000, with t_start being a constant.
Then you calculate ep_dot = (ep - last_error)/t.
But ep_dot should be something like ep_dot = (actual_value - last_value)/(t_actual - t_last_value).
The derivative term should be the difference between two consecutive error values divided by the time elapsed between those two values, so you need to store the timestamp of the previous callback in a new variable. Note also that last_error is never updated in your script and is shared between ep and eth; each error needs its own previous value.
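A minimal sketch of that bookkeeping (the names last_ep, last_eth, last_t, and compute_error_rates are mine, purely illustrative, not from your script):
last_ep = 0.0
last_eth = 0.0
last_t = None

def compute_error_rates(ep, eth, t_now):
    # Finite-difference derivatives using the previous callback's sample
    global last_ep, last_eth, last_t
    if last_t is None:
        ep_dot, eth_dot = 0.0, 0.0  # no previous sample on the first callback
    else:
        dt = t_now - last_t  # time elapsed since the previous callback
        ep_dot = (ep - last_ep) / dt
        eth_dot = (eth - last_eth) / dt
    last_ep, last_eth, last_t = ep, eth, t_now  # store for the next callback
    return ep_dot, eth_dot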
Also, the proportional error should be equal to your desired value (goal) minus the actual value; you then multiply that difference by your P coefficient.
Hope it fixes your problem!
I'm trying to get the 3D Fourier transform of the Gaussian function e^(-r^(2)/2) in Python using the numpy.fft library.
I've attempted different FFTs from the library with different inputs, shifting the results with np.fft.fftshift, trying to find a multiplicative factor, and many other things. The last thing I tried was using the 1D FFT function and then cubing the result; here's the corresponding source code:
import numpy as np

R = float(10)
N = float(100)
y = np.dtype(np.float64)
dr = R/N

def F(x):
    return np.exp(-((x*dr)**2)/2)

Frange = np.arange(1, int(N)+1)
y = np.zeros((int(N)))
i = 0
while i < int(N):
    y[i] = F(Frange[i])
    i += 1
y = y/3
y_fft = np.fft.fftshift(np.abs(np.fft.fft(y)))**3
print(y_fft)
The first values I get:
4.62e-03, 4.63e-03, 4.65e-03, 4.69e-03, 4.74e-03
According to Lado, Fred (1971), "Numerical Fourier Transforms in One, Two, and Three Dimensions for Liquid State Calculations", the analytic solution to the problem is (2*pi)^(3/2) * e^(-k^(2)/2).
And the first values of the analytic solution with the same values of R and N are:
14.99, 12.92, 10.10, 7.15, 4.58
I also created a DFT program using a formula provided in the article above, which gives the expected results, but I haven't been able to replicate the analytic results in any of my attempts using the NumPy or SciPy FFT libraries.
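For reference, the formula from the article (the one my DFT program below implements) is the radially symmetric form of the 3D transform, discretized as a sine sum:
F(k_j) = (4*pi/k_j) * Integral from 0 to inf of [ r * F(r) * sin(k_j*r) ] dr
       ~ (4*pi/k_j) * Sum over i of [ dr * r_i * F(r_i) * sin(k_j*r_i) ]
with r_i = i*dr and k_j = j*dk.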
Here's my program for the analytic and DFT results:
import math
import numpy as np

def F(r):
    x = math.exp((-1/2)*(r**2))
    return x

def FT(r):
    x = ((2*math.pi)**(3/2))*(math.exp((-1/2)*(r**2)))
    return x

R = float(10)
N = int(100)
ft = np.zeros(N)
fta = np.zeros(N)
dr = R/N
dk = math.pi/R
print("\tk \t\t\t Discrete \t\t\t Analytic")
for j in range(1, N):
    kj = j*dk
    # Discrete transform
    sum = 0
    for i in range(1, N):
        ri = i*dr
        sum = sum + (dr*ri*(F(ri))*(math.sin(kj*ri)))
    ft[j] = ((4*math.pi)/kj)*sum
    # Analytic transform
    fta[j] = FT(kj)
    # Print results
    print(kj, f" \t\t{ft[j]:.10E} \t\t{fta[j]:.10E}")
And these are the first few results:
k Discrete Analytic
0.3141592653589793 1.4991263193E+01 1.4991263193E+01
0.6283185307179586 1.2928362116E+01 1.2928362116E+01
0.9424777960769379 1.0101494686E+01 1.0101494686E+01
1.2566370614359172 7.1509645344E+00 7.1509645344E+00
1.5707963267948966 4.5864901093E+00 4.5864901093E+00
I'm currently trying to calculate the Coppock Curve for a strategy I'm making in Python.
I've written it like this (ROC1 is the 11-length and ROC2 is the 14-length):
final = wma_onehr*(rocOne_onehr+rocTwo_onehr)
I know my values are correct, but this is the only calculation I have for it and it does not match TradingView at all. For instance, when I run it I get:
ROC1: -1.094
ROC2: -0.961
WMA: 7215.866
And my answer is -15037.864744
While TradingView is at -0.9.
These values are nowhere near close, and I'm just wondering why I haven't found a way to get a value anywhere near that. (I'm using the taapio API, if anyone's wondering.)
Take a look at the function below. Note that data_array passed to the function is a one-dimensional NumPy array containing the close prices of the financial asset. The key point is that the Coppock Curve is a weighted moving average of the summed rates of change, not the WMA multiplied by that sum, which is why your value is off by orders of magnitude.
import numpy as np

def coppock_curve(data_array, sht_roc_length=11, long_roc_length=14, curve_length=10):  # Coppock Curve
    """
    :param data_array: one-dimensional array of close prices
    :param sht_roc_length: Short Rate of Change length
    :param long_roc_length: Long Rate of Change length
    :param curve_length: Coppock Curve Line length
    :return: Coppock oscillator values
    """
    data_array = data_array[-(curve_length + max(sht_roc_length, long_roc_length, curve_length) + 1):]
    # Calculation of short rate of change
    roc11 = (data_array[-(curve_length + 1):] - data_array[-(curve_length + sht_roc_length + 1):-sht_roc_length]) /\
        data_array[-(curve_length + sht_roc_length + 1):-sht_roc_length] * 100
    # Calculation of long rate of change
    roc14 = (data_array[-(curve_length + 1):] - data_array[:-long_roc_length]) / data_array[:-long_roc_length] * 100
    sum_values = roc11 + roc14  # sum of the two rates of change
    # Coppock curve line: weighted moving average of the ROC sum
    curve = np.convolve(sum_values, np.arange(1, curve_length + 1, dtype=int)[::-1], 'valid') / \
        np.arange(1, curve_length + 1).sum()
    return curve
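A quick usage sketch (the closes array here is made up, just to show the call):
import numpy as np

closes = 100 + np.cumsum(np.random.randn(60))  # hypothetical close prices
curve = coppock_curve(closes)
print(curve[-1])  # most recent Coppock value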
I am trying to use SVD and an eigendecomposition for some data analysis using Dynamic Mode Decomposition (DMD). I am running into a simple problem of getting different results from Matlab and Python. I'm confused and don't know why Python is giving me different matrix values when everything looks (and, I think, is) correct.
So instead of using real data this time and looking at large data sets, I generated data. I will look at an eigenvalue plot after the eigendecomposition. I also use a delay embedding, because I work with a data vector that is only 2x100, so I build a Hankel-type matrix to enrich the data with 10 delays.
clear all; close all; clc;

data = linspace(1,100);
data2 = linspace(2,101);
data = [data; data2];
numDelays = 10;
relTol = 10^-6;

%% Create first and second snapshot matrices for DMD. Any columns with missing
% data are not used.
disp('Constructing Data Matrices:')
X = zeros((numDelays+1)*size(data,1), size(data,2)-(numDelays+1));
Y = zeros(size(X));
for i = 1:numDelays+1
    X(1 + (i-1)*size(data,1):i*size(data,1),:) = ...
        data(:,(i):size(data,2)-(numDelays+1) + (i-1));
    Y(1 + (i-1)*size(data,1):i*size(data,1),:) = ...
        data(:,(i+1):size(data,2)-(numDelays+1) + (i));
end

[U,S,V] = svd(X);
r = find(diag(S) > S(1,1)*relTol, 1, 'last');
disp(['DMD subspace dimension: ', num2str(r)])
U = U(:,1:r);
S = S(1:r,1:r);
V = V(:,1:r);
Atil = (U'*Y)*V*(S^-1);
[what, lambda] = eig(Atil);
Phi = (Y*V)*(S^-1)*what;
Keigs = diag(lambda);

tt = linspace(0, 2*pi, 101);
figure;
plot(real(Keigs), imag(Keigs), 'ro')
hold on
plot(cos(tt), sin(tt), '--')
import scipy.io as sc
import math as m
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import sys
from numpy import dot, multiply, diag, power, pi, exp, sin, cos, cosh, tanh, real, imag
from scipy.linalg import expm, sinm, cosm, fractional_matrix_power, svd, eig, inv

def dmd(X, Y, relTol):
    U2, Sig2, Vh2 = svd(X, False)  # SVD of input matrix
    S = np.zeros((Sig2.shape[0], Sig2.shape[0]))  # create S matrix of zeros sized by the diagonal of S
    np.fill_diagonal(S, Sig2)  # fill the diagonal of S with the nonzero singular values
    r = np.count_nonzero(np.diag(S) > S[0, 0] * relTol)  # rank truncation
    U = U2[:, :r]
    Sig = diag(Sig2)[:r, :r]
    V = Vh2.conj().T[:, :r]
    Atil = dot(dot(dot(U.conj().T, Y), V), inv(Sig))  # build A tilde
    print(Atil)
    mu, W = eig(Atil)
    Phi = dot(dot(dot(Y, V), inv(Sig)), W)  # build DMD modes
    return mu, Phi

data = np.array([(np.linspace(1, 100, 100)), (np.linspace(2, 101, 100))])
Data = np.array(data)

######### Choose number of delays ###########
# observable (coordinates of feature points). Setting to zero means only
# experimental observables will be used.
numDelays = 10
relTol = 10**-6

########## Create data matrices for DMD ###############
# Create first and second snapshot matrices for DMD. Any columns with missing
# data are not used.
X = np.zeros(((numDelays + 1) * data.shape[0], data.shape[1] - (numDelays + 1)))
Y = np.zeros(X.shape)
for i in range(1, numDelays + 2):
    X[0 + (i - 1) * Data.shape[0]:i * Data.shape[0], :] = Data[:, (i):Data.shape[1] - (numDelays + 1) + (i - 0)]
    Y[0 + (i - 1) * Data.shape[0]:i * Data.shape[0], :] = Data[:, (i + 0):Data.shape[1] - (numDelays + 1) + (i)]

Keigs, Phi = dmd(X, Y, relTol)

tt = np.linspace(0, 2*np.pi, 101)
plt.figure()
plt.plot(np.cos(tt), np.sin(tt), '--')
plt.plot(Keigs.real, Keigs.imag, 'ro')
plt.title('DMD Eigenvalues')
plt.xlabel(r'Real $\lambda$')
plt.ylabel(r'Imaginary $\lambda$')
# plt.axes().set_aspect('equal')
plt.show()
So in Matlab and Python I get my eigenvalues to all sit on the unit circle (as expected), and I get precisely one sitting at 1.
The problem comes when I look at the matrices from the SVD: they appear to have different values. The only matrix that is the same is the S (or Sig) matrix; the rest differ by a value or a +/- sign. The thing that most piqued my interest is the Atil matrix.
In Matlab it looks like
[1.0157, -0.3116; 7.91229e-4, 0.9843]
and in Python it looks like
[1.0, -4.508e-15; -4.439e-18, 1.0]
This may look only slightly off, possibly due to numerical error, but when I use real data and these matrices differ, it messes up my analysis.
The SVD of a non-square matrix is not unique in U and V. Even for a square matrix with non-zero, non-degenerate singular values, the singular vectors in U and V are only unique up to a sign factor.
https://math.stackexchange.com/questions/644327/how-unique-on-non-unique-are-u-and-v-in-singular-value-decomposition-svd
Moreover, Matlab (LAPACK + BLAS) and scipy.linalg.svd may use different algorithms for the SVD.
This can lead to the differences you have experienced.
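A minimal sketch of that sign ambiguity (the test matrix is arbitrary, not from your data): flipping the sign of a matched left/right singular-vector pair yields an equally valid SVD of the same matrix.
import numpy as np

A = np.arange(12, dtype=float).reshape(4, 3) + np.eye(4, 3)  # arbitrary test matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Flip the sign of the first left/right singular-vector pair
U2 = U.copy(); Vt2 = Vt.copy()
U2[:, 0] *= -1
Vt2[0, :] *= -1

# Both factorizations reconstruct A to machine precision
print(np.allclose(U @ np.diag(s) @ Vt, A))    # True
print(np.allclose(U2 @ np.diag(s) @ Vt2, A))  # True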
I implemented the KS-test to check which distributions fit my data best. At the moment I pass the CDFs as input, because the standard KS-test computes the maximum difference between the CDFs. I just wanted to know if this is the right way to do it, or should I use the PDFs as input? The statistic values and p-values look good to me, and with the critical value of the KS-test I can choose which hypothesis tests I should not reject.
Code example
gammafit = stats.gamma.fit(h4)
pdf_gamma = stats.gamma.pdf(lnspc, *gammafit)
cdf_gamma = stats.gamma.cdf(lnspc, *gammafit)
plt.plot(lnspc, pdf_gamma, label="Gamma")
gamma_kstest999 = stats.ks_2samp(np.cumsum(n4), cdf_gamma)
You should not pass CDFs (or PDFs) at all: ks_2samp takes two raw samples as input and builds the empirical CDFs internally. According to the function's source code:
data1 = np.sort(data1)
data2 = np.sort(data2)
n1 = data1.shape[0]
n2 = data2.shape[0]
data_all = np.concatenate([data1, data2])
cdf1 = np.searchsorted(data1, data_all, side='right') / (1.0*n1)
cdf2 = np.searchsorted(data2, data_all, side='right') / (1.0*n2)
d = np.max(np.absolute(cdf1 - cdf2))
# Note: d absolute not signed distance
en = np.sqrt(n1 * n2 / float(n1 + n2))
try:
    prob = distributions.kstwobign.sf((en + 0.12 + 0.11 / en) * d)
except:
    prob = 1.0
return Ks_2sampResult(d, prob)
The cdf1 and cdf2 variables hold the empirical cumulative distributions built from the two input samples.
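A minimal usage sketch under those assumptions (h4 here is a synthetic stand-in for the observed sample in the question): either run the two-sample test against draws from the fitted gamma, or use SciPy's one-sample kstest directly against the fitted CDF.
import numpy as np
from scipy import stats

h4 = stats.gamma.rvs(2.0, size=500)  # placeholder for the observed sample
gammafit = stats.gamma.fit(h4)

# Two-sample test: compare the data with draws from the fitted distribution
synthetic = stats.gamma.rvs(*gammafit, size=len(h4))
d2, p2 = stats.ks_2samp(h4, synthetic)

# One-sample test: compare the data directly against the fitted CDF
d1, p1 = stats.kstest(h4, 'gamma', args=gammafit)

print(d1, p1, d2, p2)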