Variable values not getting into inner for loop - python

My inner for loop is not using new values from the outer loop.
What's wrong, and how do I fix it?
import numpy as np

a = 0.0000001
b = 15.
d = 0.1
TOL = 1.0e-6
a1 = []
dd = 0.1
da1 = []

for i in range(0, 10):
    def f(v):
        return np.cosh(d * v) - (1./v) * np.sinh(d * v) - 1.
    FA = f(a)
    FB = f(b)
    for I in range(0, 1000):
        p = a + (b - a) / 2.0
        FP = f(p)
        if FA == 0 or (b - a)/2.0 < TOL:
            break
        I = I + 1
        if FA * FP > 0:
            a = p
            FA = FP
        if FA * FP < 0:
            b = p
    a1.append(p)
    da1.append(d)
    d = d + dd

print a1
print da1
Here is a second implementation. Variable d shows new values, but the inner loop keeps giving me the same result, as if it is not registering the new d value.
import numpy as np

a = 0.00001
a1 = []
dd = 0.1
da = 1.e-5
d = 0.1
yvs = []
ds = []
EE = []

while d <= 1.:
    dnew = d
    print dnew
    for i in range(0, 1000000):
        dnew = d
        yv = np.cosh(dnew * a) - (1./a) * np.sinh(dnew * a) - 1.
        yvs.append(yv)
        a = a + da
        a1.append(a)
        i = i + 1
    for ii in range(0, 999999):
        As = (a1[ii] + a1[ii+1]) / 2.
        E = -1. * As**2
        if yvs[ii] * yvs[ii+1] < 0:
            EE.append(E)
            print As, E
        ii = ii + 1
    d = dnew + dd

I deleted my earlier answer; it's not the main problem you're having.
You traced the wrong values: d and dnew do, indeed, change. However, they are not part of the data flow for the values you're worried about.
In the upper program, d depends exclusively on its starting value and increment value, both of them 0.1, and dd doesn't change. p depends exclusively on the values of a and b, which also don't change.
Yes, you do some nice work to compute FA, FB, and FP -- but then you hit the bottom of the loop, you don't save them anywhere, and you overwrite them on the next pass.
In the lower program, you have the same problem with As and E: you never change the parameters on which they depend (that's all in yvs, which you never print out), so the outputs are the same on every loop.
Since you are using one- and two-letter variables and haven't documented your code, I don't have a good idea of how to fix this: I have little idea what your program is supposed to do, although it appears to want to converge some computational series.
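That said, if the intent of the first program is simply to re-run the bisection for each new value of d, one guess at a fix (an assumption on my part, since the goal isn't documented) is to reset the bracket [a, b] at the top of every outer iteration so the inner loop actually starts fresh:
import numpy as np

# Sketch only: assumes the goal is to bisect f on the fixed bracket
# [1e-7, 15] once per value of d. Not a drop-in replacement for the original.
TOL = 1.0e-6
dd = 0.1
d = 0.1
a1, da1 = [], []

for i in range(10):
    a, b = 1.0e-7, 15.0                      # reset the bracket every outer iteration
    f = lambda v: np.cosh(d * v) - (1. / v) * np.sinh(d * v) - 1.
    FA = f(a)
    p = a
    for _ in range(1000):
        p = a + (b - a) / 2.0                # midpoint of the current bracket
        FP = f(p)
        if FP == 0 or (b - a) / 2.0 < TOL:   # converged
            break
        if FA * FP > 0:                      # root is in the upper half
            a, FA = p, FP
        else:                                # root is in the lower half
            b = p
    a1.append(p)
    da1.append(d)
    d += dd

print(a1)
print(da1)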

Related

Structural errors in function construction

I need to solve a non-linear function.
Problem: there are logical errors in the part of the code that refines the root by the half-division (bisection) method. Please help me figure it out and point out the errors (def utoch()).
Code structure:
I tabulate the function and write the argument and the function value for that argument into a two-dimensional 10-by-2 array.
Next, I separate the roots (find a small interval [a; b] containing exactly one root: f(a) and f(b) must have different signs). I enter such a and b into columns 1 and 2 of the 3-by-5 array AB.
Now I need to refine the roots by the half-division method. In the AB array, column 3 gets the root, column 4 the value of the function at the root, and column 5 the number of steps taken to find that root.
I also attach block diagrams, but they are created for BASIC
import math

xn = 1
xk = 3
dx = 0.2
N = 10
XY = [[0.0] * 2 for b in range(10+1)]
AB = [[0.0] * 5 for q in range(3)]

def f(x):
    return (math.atan(x) + math.sin(x) - 2)

def TabXY():
    for i in range(N+1):
        x = xn + dx*i
        XY[i][0] = float(round(x,1))
        XY[i][1] = float(round(f(x),3))
    return(XY)

print(TabXY())

def otd():
    Nr = 0
    for i in range(1, N+1):
        if XY[i-1][1] * XY[i][1] < 0:
            AB[Nr][0] = XY[i-1][1]
            AB[Nr][1] = XY[i][1]
            Nr = Nr + 1
    return(AB)

print(otd())

def utoch():
    Nk = 3
    for i in range(0, Nk):
        a = AB[i][0]
        b = AB[i][1]
        Fa = f(a)
        while abs(b-a) < 0.001:
            c = float((a + b) / 2)
            Fc = f(c)
            if Fa * Fc < 0:
                a = c
                Fa = Fc
            else:
                b = c
        AB[i][2] = c
        AB[i][3] = Fc
    return (AB)

print(utoch())
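For reference, a minimal sketch (not part of the original post) of what the refinement step might look like with standard bisection, assuming otd() stores the interval endpoints (the x values rather than the function values) in columns 1 and 2 of AB:
def utoch_sketch():
    # Sketch only: assumes AB[i][0], AB[i][1] hold endpoints a, b with f(a)*f(b) < 0.
    for i in range(3):
        a, b = AB[i][0], AB[i][1]
        if a == b:                      # row not filled by otd(), skip it
            continue
        Fa = f(a)
        steps = 0
        while abs(b - a) > 0.001:       # loop while the interval is still wide
            c = (a + b) / 2
            Fc = f(c)
            if Fa * Fc < 0:             # sign change in [a, c]: root is there
                b = c
            else:                       # otherwise the root is in [c, b]
                a, Fa = c, Fc
            steps += 1
        AB[i][2] = c                    # refined root
        AB[i][3] = Fc                   # function value at the root
        AB[i][4] = steps                # number of halving steps
    return AB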

How do I write a Python code for partial fraction decomposition without using "apart"?

So I am very inexperienced with Python (I know basically nothing), and our teacher gave us the task of writing code that performs a partial fraction decomposition of this function:
I don't really know how to start or even how to define that function. I tried this at first:
def function(x):
    a = (x^4)-(3*x^2)+x+5
    b = (x^11)-(3*x^10)-(x^9)+(7*x^8)-(9*x^7)+(23*x^6)-(11*x^5)-(3*x^4)-(4*x^3)-(32*x^2)-16
    return a/b
But our maths script says that we need to split up the denominator and then make a system of equations out of it and solve it.
So I was thinking about defining each part of the function itself and then making a function somehow like a = 7*x and using it like f(x) = b/a^7, if that works, but I don't really know. We are unfortunately not allowed to use "apart", which I think is a sympy function?
Thank you so much in advance!
Sincerely, Phie
Addition: After a few hours of trying I came up with this, but I am fairly sure this is not the way to do it. It also tells me that variable l is not defined when computing z, and I am sure all the others aren't defined either. I don't know what to do.
def function(x):
    global a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v
    a = (x^4)-(3*x^2)+x+5
    b = 11
    c = 10
    d = 9
    e = 8
    f = 7
    g = 6
    h = 5
    i = 4
    j = 3
    k = 2
    l = x**b
    m = 3*x**c
    n = x**d
    o = 7*x**e
    p = 9*x**f
    q = 23*x**g
    r = 11*x**h
    s = 3*x**i
    t = 4*x**j
    u = 32*x**k
    v = 16
    return a/(l-m-n+o-p+q-r-s-t-u-v)

print("We are starting the partial fraction decomposition with this function: (x^4)-(3*x^2)+x+5 / (x^11)-(3*x^10)-(x^9)+(7*x^8)-(9*x^7)+(23*x^6)-(11*x^5)-(3*x^4)-(4*x^3)-(32*x^2)-16")

z = l-m-n+o-p+q-r-s-t-u-v
while c >= 0:
    c = c-1
    z = z-l
while d >= 0:
    d = d-1
    z = z-m
while e >= 0:
    e = e-1
    z = z-n
while f >= 0:
    f = f-1
    z = z+o
while g >= 0:
    g = g-1
    z = z-p
while h >= 0:
    h = h-1
    z = z+q
while i >= 0:
    i = i-1
    z = z-r
while j >= 0:
    j = j-1
    z = z-s
while k >= 0:
    k = k-1
    z = z-t
print(z)
Since I just solved this myself, here's some input:
Let poly = function(x) for your function, but be careful to replace ^ with **. Include both from sympy import * and from sympy.abc import a, b, c, d, e, f, g, h, i, j, k, x.
Using factor(expr) you can find all the roots of your denominator; use these to define terms with 11 unknown coefficients: term_1 = a/(x-2), term_2 = b/(x-2)**2, ..., term_6 = (f*x + g)/(x**2 + 1), ..., term_8 = (j*x + k)/(x**2 + 1) (you get the gist). Define your_sum = term_1 + ... + term_8 and eq = Eq(your_sum, poly).
Then use solve_undetermined_coeffs(eq, [a, b, ..., k], x) to get the result.
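To make that concrete, here is a small sketch of the same idea on a simpler stand-in function (the factors below are purely illustrative; the actual factorization of your degree-11 denominator is not reproduced here):
from sympy import symbols, Eq, solve_undetermined_coeffs

x, a, b, c = symbols('x a b c')

# Stand-in example: decompose (x + 3) / ((x - 1)*(x + 2)**2)
numer = x + 3
denom = (x - 1) * (x + 2)**2

# One unknown coefficient per factor of the denominator
ansatz = a/(x - 1) + b/(x + 2) + c/(x + 2)**2

# Multiply through by the denominator so both sides are polynomials in x
eq = Eq((ansatz * denom).cancel(), numer)

print(solve_undetermined_coeffs(eq, [a, b, c], x))
# expected: {a: 4/9, b: -4/9, c: -1/3}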

Confusing result with quadratic regression

So, I'm trying to fit some pairs of x,y data with a quadratic regression; a sample formula can be found at http://polynomialregression.drque.net/math.html.
Below is my code, which does the regression both with that explicit formula and with NumPy's built-in functions:
import numpy as np

x = [6.230825,6.248279,6.265732]
y = [0.312949,0.309886,0.306639472]

toCheck = x[2]

def evaluateValue(coeff,x):
    c,b,a = coeff
    val = np.around( a+b*x+c*x**2,9)
    error= np.abs(0.306639472-val)*100/0.306639472
    act = 0.306639472
    error= np.abs(act-val)*100/act
    print "Value = {:.9f} Error = {:.2f}%".format(val,error)

###### Using numpy ######################
coeff = np.polyfit(x,y,2)
evaluateValue(coeff, toCheck)

################# Using explicit formula
def determinant(a,b,c,d,e,f,g,h,i):
    # the matrix is [[a,b,c],[d,e,f],[g,h,i]]
    return a*(e*i - f*h) - b*(d*i - g*f) + c*(d*h - e*g)

a = b = c = d = e = m = n = p = 0
a = len(x)
for i,j in zip(x,y):
    b += i
    c += i**2
    d += i**3
    e += i**4
    m += j
    n += j*i
    p += j*i**2

det = determinant(a,b,c,b,c,d,c,d,e)
c0 = determinant(m,b,c,n,c,d,p,d,e)/det
c1 = determinant(a,m,c,b,n,d,c,p,e)/det
c2 = determinant(a,b,m,b,c,n,c,d,p)/det

evaluateValue([c2,c1,c0], toCheck)

###### Using another explicit alternative
def determinantAlt(a,b,c,d,e,f,g,h,i):
    return a*e*i - a*f*h - b*d*i + b*g*f + c*d*h - c*e*g  # <- brackets removed

a = b = c = d = e = m = n = p = 0
a = len(x)
for i,j in zip(x,y):
    b += i
    c += i**2
    d += i**3
    e += i**4
    m += j
    n += j*i
    p += j*i**2

det = determinantAlt(a,b,c,b,c,d,c,d,e)
c0 = determinantAlt(m,b,c,n,c,d,p,d,e)/det
c1 = determinantAlt(a,m,c,b,n,d,c,p,e)/det
c2 = determinantAlt(a,b,m,b,c,n,c,d,p)/det

evaluateValue([c2,c1,c0], toCheck)
This code gives this output
Value = 0.306639472 Error = 0.00%
Value = 0.308333580 Error = 0.55%
Value = 0.585786477 Error = 91.03%
As you can see, these are different from each other, and the third one is totally wrong. Now my questions are:
1. Why is the explicit formula giving a slightly wrong result, and how can I improve it?
2. How is NumPy giving such an accurate result?
3. In the third case, how does merely expanding the parentheses change the result so drastically?
So there are a few things that are going on here that are unfortunately plaguing the way you are doing things. Take a look at this code:
for i,j in zip(x,y):
    b += i
    c += i**2
    d += i**3
    e += i**4
    m += j
    n += j*i
    p += j*i**2
You are building features such that the x values are not only squared, but also cubed and raised to the fourth power.
If you print out each of these values before you put them into the 3 x 3 matrix to solve:
In [35]: a = b = c = d = e = m = n = p = 0
    ...: a = len(x)
    ...: for i,j in zip(x,y):
    ...:     b += i
    ...:     c += i**2
    ...:     d += i**3
    ...:     e += i**4
    ...:     m += j
    ...:     n += j*i
    ...:     p += j*i**2
    ...: print(a, b, c, d, e, m, n, p)
    ...:
    ...:
3 18.744836 117.12356813829001 731.8283056811686 4572.738547313946 0.9294744720000001 5.807505391292503 36.28641270376207
When dealing with floating-point arithmetic, and especially with small values, the order of operations does matter. What's happening here is that, by fluke, the mix of small and large values that have been computed results in a determinant that is very small. Therefore, when you compute the determinant using the factored form and the expanded form, notice how you get slightly different results, but also look at the precision of the values:
In [36]: det = determinant(a,b,c,b,c,d,c,d,e)
In [37]: det
Out[37]: 1.0913403514223319e-10
In [38]: det = determinantAlt(a,b,c,b,c,d,c,d,e)
In [39]: det
Out[39]: 2.3283064365386963e-10
The determinant is on the order of 10^-10! The reason why there's a discrepancy is that with floating-point arithmetic, theoretically both determinant methods should yield the same result, but in reality they give slightly different results, and this is due to something called error propagation. Because there is a finite number of bits that can represent a floating-point number, the order of operations changes how the error propagates, so even though you removed the parentheses and the formulas do essentially match, the order of operations to get to the result is now different. This article is an essential read for any software developer who deals with floating-point arithmetic regularly: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
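As a tiny self-contained illustration (my own example, not taken from your data) of how the order of operations alone can change a floating-point result:
# Mathematically identical sums, different rounding paths
print((0.1 + 0.2) + 0.3)   # 0.6000000000000001
print(0.1 + (0.2 + 0.3))   # 0.6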
Therefore, when you solve the system with Cramer's Rule, you inevitably divide by this main determinant. Even though the two determinants differ only on the order of 10^-10 in absolute terms, that difference is large relative to the determinants themselves, so dividing by these near-zero values amplifies it and the two methods produce very different coefficients.
The reason why NumPy doesn't have this problem is because they solve the system by least-squares and the pseudo-inverse and not using Cramer's Rule. I would not recommend using Cramer's Rule to find regression coefficients mostly due to experience and that there are more robust ways of doing it.
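For illustration, here is a sketch (mine, not code from the question) of setting the same fit up as a least-squares problem on the Vandermonde matrix and letting np.linalg.lstsq solve it, which is essentially what np.polyfit does under the hood:
import numpy as np

x = np.array([6.230825, 6.248279, 6.265732])
y = np.array([0.312949, 0.309886, 0.306639472])

# Columns are x**2, x, 1 (highest power first, like np.polyfit)
A = np.vander(x, 3)

# Solve A @ coeff ~= y in the least-squares sense
coeff = np.linalg.lstsq(A, y, rcond=None)[0]
print(coeff)
print(np.polyval(coeff, x[2]))   # evaluate the fit at the point being checked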
However, to solve your particular problem, it helps to normalize the data so that its dynamic range is centered at 0. The features you use to construct your coefficient matrix are then more sensible, and the computation has an easier time dealing with the data. In your case, something as simple as subtracting the mean of the x values from the data should work. If you then have new data points you want to predict, you must subtract the mean of the x data from them first, prior to doing the prediction.
Therefore, at the beginning of your code, perform mean subtraction and regress on this data. I've shown where I've modified the code from your source above:
import numpy as np

x = [6.230825,6.248279,6.265732]
y = [0.312949,0.309886,0.306639472]

# Calculate mean
me = sum(x) / len(x)

# Make new dataset that is mean subtracted
xx = [pt - me for pt in x]

#toCheck = x[2]
# Data point to check is now mean subtracted
toCheck = x[2] - me

def evaluateValue(coeff,x):
    c,b,a = coeff
    val = np.around( a+b*x+c*x**2,9)
    act = 0.306639472
    error= np.abs(act-val)*100/act
    print("Value = {:.9f} Error = {:.2f}%".format(val,error))

###### Using numpy ######################
coeff = np.polyfit(xx,y,2) # Change
evaluateValue(coeff, toCheck)

################# Using explicit formula
def determinant(a,b,c,d,e,f,g,h,i):
    # the matrix is [[a,b,c],[d,e,f],[g,h,i]]
    return a*(e*i - f*h) - b*(d*i - g*f) + c*(d*h - e*g)

a = b = c = d = e = m = n = p = 0
a = len(x)
for i,j in zip(xx,y): # Change
    b += i
    c += i**2
    d += i**3
    e += i**4
    m += j
    n += j*i
    p += j*i**2

det = determinant(a,b,c,b,c,d,c,d,e)
c0 = determinant(m,b,c,n,c,d,p,d,e)/det
c1 = determinant(a,m,c,b,n,d,c,p,e)/det
c2 = determinant(a,b,m,b,c,n,c,d,p)/det

evaluateValue([c2,c1,c0], toCheck)

###### Using another explicit alternative
def determinantAlt(a,b,c,d,e,f,g,h,i):
    return a*e*i - a*f*h - b*d*i + b*g*f + c*d*h - c*e*g  # <- brackets removed

a = b = c = d = e = m = n = p = 0
a = len(x)
for i,j in zip(xx,y): # Change
    b += i
    c += i**2
    d += i**3
    e += i**4
    m += j
    n += j*i
    p += j*i**2

det = determinantAlt(a,b,c,b,c,d,c,d,e)
c0 = determinantAlt(m,b,c,n,c,d,p,d,e)/det
c1 = determinantAlt(a,m,c,b,n,d,c,p,e)/det
c2 = determinantAlt(a,b,m,b,c,n,c,d,p)/det

evaluateValue([c2,c1,c0], toCheck)
When I run this, we now get:
In [41]: run interp_test
Value = 0.306639472 Error = 0.00%
Value = 0.306639472 Error = 0.00%
Value = 0.306639472 Error = 0.00%
As some final reading for you, here is a similar problem that someone else encountered, which I addressed in their question: Fitting a quadratic function in python without numpy polyfit. The summary is that I advised them not to use Cramer's Rule and to use least-squares through the pseudo-inverse instead. I showed them how to get exactly the same results without using numpy.polyfit. Also, least-squares generalizes: if you have more than 3 points, you can still fit a quadratic through them so that the model has the smallest possible error.

How can I improve my Karatsuba logic so that my code works?

Can't get my Karatsuba algorithm to work properly. There is an infinite loop in my code. I think the issue starts right after:
q = str(int(c) + int(d))
because None is the return value. Why is None being returned?
My investigation so far has led me to conclude that the infinite loop is here:
pq = Karatsuba(p, q)
I've been rejigging my code, but nothing has worked so far.
def Karatsuba(x, y):
    '''
    Input: Two n-digit positive integers. N must be a power of two.
    Output: The product of x and y.
    '''
    if len(x) == 1 and len(y) == 1:
        return int(x) * int(y)
    else:
        split = len(x) // 2
        a = x[:split]
        b = x[split:]
        c = y[:split]
        d = y[split:]
        ac = Karatsuba(a, c)
        bd = Karatsuba(b, d)
        p = str(int(a) + int(b))
        q = str(int(c) + int(d))
        pq = Karatsuba(p, q)
        adbc = pq - ac - bd
        return 10**len(x) * ac + (10**split) * adbc + bd
print(Karatsuba('1234', '5678'))
The answer produced should be 7006652
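For comparison, here is a minimal working sketch (not the poster's code) that operates on plain ints and splits on half the digit count of the longer operand, which sidesteps the uneven-split and padding problems:
def karatsuba(x, y):
    # Base case: single-digit factors multiply directly
    if x < 10 or y < 10:
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    a, b = divmod(x, 10 ** half)      # x = a*10**half + b
    c, d = divmod(y, 10 ** half)      # y = c*10**half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    adbc = karatsuba(a + b, c + d) - ac - bd
    return ac * 10 ** (2 * half) + adbc * 10 ** half + bd

print(karatsuba(1234, 5678))  # 7006652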

Speeding up arithmetic with Python Decimal library

I am trying to run a function that is similar to Google's PageRank algorithm (for non-commercial purposes, of course). Here is the Python code; note that a[0] is the only thing that matters here, and a[0] contains an n x n matrix such as [[0,1,1],[1,0,1],[1,1,0]]. Also, you can find where I got this code from on Wikipedia:
def GetNodeRanks(a): # graph, names, size
    numIterations = 10
    adjacencyMatrix = copy.deepcopy(a[0])
    b = [1]*len(adjacencyMatrix)
    tmp = [0]*len(adjacencyMatrix)
    for i in range(numIterations):
        for j in range(len(adjacencyMatrix)):
            tmp[j] = 0
            for k in range(len(adjacencyMatrix)):
                tmp[j] = tmp[j] + adjacencyMatrix[j][k] * b[k]
        norm_sq = 0
        for j in range(len(adjacencyMatrix)):
            norm_sq = norm_sq + tmp[j]*tmp[j]
        norm = math.sqrt(norm_sq)
        for j in range(len(b)):
            b[j] = tmp[j] / norm
    print b
    return b
When I run this implementation (on a matrix much larger than a 3 x 3 matrix, n.b.), it does not yield enough precision to calculate the ranks in a way that allows me to compare them usefully. So I tried this instead:
from decimal import *
getcontext().prec = 5

def GetNodeRanks(a): # graph, names, size
    numIterations = 10
    adjacencyMatrix = copy.deepcopy(a[0])
    b = [Decimal(1)]*len(adjacencyMatrix)
    tmp = [Decimal(0)]*len(adjacencyMatrix)
    for i in range(numIterations):
        for j in range(len(adjacencyMatrix)):
            tmp[j] = Decimal(0)
            for k in range(len(adjacencyMatrix)):
                tmp[j] = Decimal(tmp[j] + adjacencyMatrix[j][k] * b[k])
        norm_sq = Decimal(0)
        for j in range(len(adjacencyMatrix)):
            norm_sq = Decimal(norm_sq + tmp[j]*tmp[j])
        norm = Decimal(norm_sq).sqrt
        for j in range(len(b)):
            b[j] = Decimal(tmp[j] / norm)
    print b
    return b
Even at this unhelpfully low precision, the code was extremely slow and never finished running in the time I sat waiting for it to run. Previously, the code was quick but insufficiently precise.
Is there a sensible/easy way to make the code run quickly and precisely at the same time?
A few tips for speeding up:
optimize the code inside the loops
move everything you can out of the inner loop
do not recompute what is already known; store it in variables
do not do things that are not necessary; skip them
consider using list comprehensions, they are often a bit faster
stop optimizing as soon as the speed is acceptable
Walking through your code:
from decimal import *
getcontext().prec = 5

def GetNodeRanks(a): # graph, names, size
    # opt: pass in a[0] directly, you do not use the rest
    numIterations = 10
    adjacencyMatrix = copy.deepcopy(a[0])
    # opt: why copy.deepcopy? You do not modify adjacencyMatrix
    b = [Decimal(1)]*len(adjacencyMatrix)
    # opt: you often call Decimal(1) and Decimal(0), and that takes some time;
    # do it only once, like
    #   dec_zero = Decimal(0)
    #   dec_one = Decimal(1)
    # prepare also other repeatedly used data structures:
    #   len_adjacencyMatrix = len(adjacencyMatrix)
    #   adjacencyMatrix_range = range(len_adjacencyMatrix)
    # Replace the code with pre-calculated variables yourself
    tmp = [Decimal(0)]*len(adjacencyMatrix)
    for i in range(numIterations):
        for j in range(len(adjacencyMatrix)):
            tmp[j] = Decimal(0)
            for k in range(len(adjacencyMatrix)):
                tmp[j] = Decimal(tmp[j] + adjacencyMatrix[j][k] * b[k])
        norm_sq = Decimal(0)
        for j in range(len(adjacencyMatrix)):
            norm_sq = Decimal(norm_sq + tmp[j]*tmp[j])
        norm = Decimal(norm_sq).sqrt  # is this correct? I would expect .sqrt()
        for j in range(len(b)):
            b[j] = Decimal(tmp[j] / norm)
    print b
    return b
Now a few examples of how list processing can be optimized in Python.
Using sum, change:
norm_sq = Decimal(0)
for j in range(len(adjacencyMatrix)):
    norm_sq = Decimal(norm_sq + tmp[j]*tmp[j])
to:
norm_sq = sum(val*val for val in tmp)
A bit of list comprehension:
Change:
for j in range(len(b)):
    b[j] = Decimal(tmp[j] / norm)
change to:
b = [Decimal(tmp_itm / norm) for tmp_itm in tmp]
Once you adopt this coding style, you will be able to optimize the initial loops too, and you will probably find that some of the pre-calculated variables become obsolete.
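To make the direction concrete, here is a rough sketch (my own, not code from the question) of what the function can look like once these ideas are applied; it assumes the adjacency matrix is passed in directly rather than wrapped in a:
from decimal import Decimal, getcontext

getcontext().prec = 5

def get_node_ranks(adjacency_matrix, num_iterations=10):
    n = len(adjacency_matrix)
    b = [Decimal(1)] * n
    for _ in range(num_iterations):
        # matrix-vector product, one comprehension per row
        tmp = [sum(row[k] * b[k] for k in range(n)) for row in adjacency_matrix]
        # Euclidean norm of tmp, computed with Decimal arithmetic
        norm = sum(v * v for v in tmp).sqrt()
        b = [v / norm for v in tmp]
    return b

print(get_node_ranks([[0, 1, 1], [1, 0, 1], [1, 1, 0]]))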
