Overlap fraction between two numeric ranges - python

I want to compute the overlap fraction of two numeric ranges. Let me illustrate my question with an example, since I believe it will be easier to understand.
Let's say that I have two numeric ranges:
A = [1,100]
B = [25,100]
What I want to know (and code) is how much B overlaps A and vice versa (how much A overlaps B).
In this case, A overlaps B (as a fraction of B) by 100%, and B overlaps A (as a fraction of A) by 75%.
I have been trying to code this in Python, but I am struggling and can't find a proper solution for computing both fractions.
What I have been able to achieve so far is the following:
Given the start and end of both numeric ranges, I have been able to figure out whether the two numeric ranges overlap (from another Stack Overflow post).
I have done this with the following code:
def is_overlapping(x1, x2, y1, y2):
    return max(x1, y1) <= min(x2, y2)
thanks!

Here's a fast solution without for loops:
def overlapping(x1, x2, y1, y2):
    # A = [x1, x2]
    # B = [y1, y2]
    # Compute how much of B is covered by A
    if x1 <= y1 and x2 >= y2:     # total overlap
        return 1
    elif x2 < y1 or y2 < x1:      # no overlap
        return 0
    elif x2 == y1 or x1 == y2:    # ranges touch at a single endpoint
        return 1 / float(y2 - y1 + 1)
    return (min(x2, y2) - max(x1, y1)) / float(y2 - y1)
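For instance, a quick check against the question's example (my addition; note the denominator is the continuous width y2 - y1, so the second call gives roughly 0.76 rather than exactly 0.75):
print(overlapping(1, 100, 25, 100))   # 1 -> A covers all of B
print(overlapping(25, 100, 1, 100))   # 0.7575... (75/99 of B's continuous width)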

One (less efficient) way to do this is by using sets.
If you set up ranges
A = range(1,101)
B = range(25, 101)
then you can find your fractions as follows:
len(set(A)&set(B))/float(len(set(B)))
and
len(set(A)&set(B))/float(len(set(A)))
giving 1.0 and 0.76.
There are 76 points in B that are also in A (since your ranges appear to be inclusive).
There are more efficient ways to do this using some mathematics as the other answers show, but this is general purpose.
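For reference, here is a sketch of the endpoint arithmetic that last sentence alludes to (my addition, assuming inclusive integer endpoints): the count of shared integers is max(0, min(a_hi, b_hi) - max(a_lo, b_lo) + 1), so no sets are needed.
a_lo, a_hi = 1, 100
b_lo, b_hi = 25, 100
overlap = max(0, min(a_hi, b_hi) - max(a_lo, b_lo) + 1)          # 76 shared integers
print(overlap / (b_hi - b_lo + 1), overlap / (a_hi - a_lo + 1))  # 1.0 0.76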

I believe there are countless ways of solving this problem. The first one that came to mind is to make use of the sum function, which can also sum over an iterable:
a = range(1, 100)
b = range(25, 100)
sum_a = sum(1 for i in b if i in a)
sum_b = sum(1 for i in a if i in b)
share_a = sum_a * 100 // len(b)
share_b = sum_b * 100 // len(a)
print(share_a, share_b)
>>> 100 75
This might be a bit more robust, e.g. when you are not working with ranges but with unsorted lists.
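For example (my addition, with made-up lists), the same pattern works unchanged on unsorted lists:
a = [3, 1, 99, 42, 7]
b = [42, 7, 500]
share_a = sum(1 for i in b if i in a) * 100 // len(b)
print(share_a)  # 66: two of b's three values are in a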

Here's my solution using numpy & Python 3:
import numpy as np

def my_example(A, B):
    # Convert to numpy arrays
    A = np.array(A)
    B = np.array(B)
    # Determine which elements overlap
    overlapping_elements = np.intersect1d(A, B)
    # Count how many there are
    coe = overlapping_elements.size
    # Return the ratios
    return coe / A.size, coe / B.size

# Generate two test lists
a = [*range(1, 101)]
b = [*range(25, 101)]

# Call the example & print the results
x, y = my_example(a, b)  # returns ratios; multiply by 100 for percentages
print(x, y)

I have assumed both lower and upper bounds are included in the range. Here is my way of calculating the overlap of each range with respect to the other:
def is_valid(x):
    try:
        valid = (len(x) == 2) and (x[0] <= x[1])
    except TypeError:
        valid = False
    return valid

def is_overlapping(x, y):
    return max(x[0], y[0]) <= min(x[1], y[1])

def overlapping_percent(x, y):
    if not (is_valid(x) and is_valid(y)):
        raise ValueError("Invalid range")
    if is_overlapping(x, y):
        overlapping_distance = min(x[1], y[1]) - max(x[0], y[0]) + 1
        width_x = x[1] - x[0] + 1
        width_y = y[1] - y[0] + 1
        overlap_x = overlapping_distance * 100.0 / width_y
        overlap_y = overlapping_distance * 100.0 / width_x
        return (overlap_x, overlap_y)
    return (0, 0)

if __name__ == '__main__':
    try:
        print(overlapping_percent((1, 100), (26, 100)))
        print(overlapping_percent((26, 100), (1, 100)))
        print(overlapping_percent((26, 50), (1, 100)))
        print(overlapping_percent((1, 100), (26, 50)))
        print(overlapping_percent((1, 100), (200, 300)))
        print(overlapping_percent((26, 150), (1, 100)))
        print(overlapping_percent((126, 50), (1, 100)))
    except Exception as e:
        print(e)
Output:
(100.0, 75.0)
(75.0, 100.0)
(25.0, 100.0)
(100.0, 25.0)
(0, 0)
(60.0, 75.0)
Invalid range
I hope it helps.

Related

minimal absolute value of the difference between A[i] and B[i] (array A is strictly increasing, array B is strictly decreasing)

Given two sequences A and B of the same length: one is strictly increasing, the other is strictly decreasing.
It is required to find an index i such that the absolute value of the difference between A[i] and B[i] is minimal. If there are several such indices, the answer is the smallest of them. The input sequences are standard Python arrays, and they are guaranteed to have the same length. Efficiency requirement: the asymptotic complexity must be at most a power of the logarithm of the length of the input sequences (i.e. polylogarithmic).
I have implemented the index lookup using the golden-section method, but I am bothered by its use of floating-point arithmetic. Is it possible to improve the algorithm so as not to use it, or can you come up with a more concise solution?
import random
import math

def peak(A, B):
    def f(x):
        return abs(A[x] - B[x])
    phi_inv = 1 / ((math.sqrt(5) + 1) / 2)
    def cal_x1(left, right):
        return right - round((right - left) * phi_inv)
    def cal_x2(left, right):
        return left + round((right - left) * phi_inv)
    left, right = 0, len(A) - 1
    x1, x2 = cal_x1(left, right), cal_x2(left, right)
    while x1 < x2:
        if f(x1) > f(x2):
            left = x1
            x1 = x2
            x2 = cal_x1(x1, right)
        else:
            right = x2
            x2 = x1
            x1 = cal_x2(left, x2)
    if x1 > 1 and f(x1 - 2) <= f(x1 - 1): return x1 - 2
    if x1 + 2 < len(A) and f(x1 + 2) < f(x1 + 1): return x1 + 2
    if x1 > 0 and f(x1 - 1) <= f(x1): return x1 - 1
    if x1 + 1 < len(A) and f(x1 + 1) < f(x1): return x1 + 1
    return x1

# Value check against a linear scan
def make_arr(inv):
    x = set()
    while len(x) != 1000:
        x.add(random.randint(-10000, 10000))
    x = sorted(list(x), reverse=inv)
    return x

x = make_arr(0)
y = make_arr(1)
needle = 1000000
c = 0
for i in range(1000):
    if abs(x[i] - y[i]) < needle:
        c = i
        needle = abs(x[i] - y[i])
print(c)
print(peak(x, y))
Approach
The poster asks for alternative, simpler solutions to the posted code.
The problem is a variant of LeetCode problem 852, where the goal is to find the peak index in a mountain array. We convert our minimum into a peak by taking the negative of the absolute difference. Our approach is to modify this Python solution to the LeetCode problem.
Code
def binary_search(x, y):
    ''' Mod of https://walkccc.me/LeetCode/problems/0852/ to use a function '''
    def f(m):
        ' Negated absolute value of the difference at index m of the two arrays '
        return -abs(x[m] - y[m])  # Make it negative so we are looking for a peak
    # Find the peak using binary search
    l = 0
    r = len(x) - 1  # was len(arr), which is undefined here
    while l < r:
        m = (l + r) // 2
        if f(m) < f(m + 1):  # still increasing
            l = m + 1
        else:
            r = m            # was decreasing
    return l
Test
import random

def linear_search(A, B):
    ' Linear search method '
    values = [abs(ai - bi) for ai, bi in zip(A, B)]
    return values.index(min(values))  # linear search

def make_arr(inv):
    random.seed(10)  # added so we can repeat with the same data
    x = set()
    while len(x) != 1000:
        x.add(random.randint(-10000, 10000))
    x = sorted(list(x), reverse=inv)
    return x

# Create data
x = make_arr(0)
y = make_arr(1)

# Run search methods
print(f'Linear Search Solution {linear_search(x, y)}')
print(f'Golden Section Search Solution {peak(x, y)}')  # posted code
print(f'Binary Search Solution {binary_search(x, y)}')
Output
Linear Search Solution 499
Golden Section Search Solution 499
Binary Search Solution 499

Let n be a square number. Using Python, how can we efficiently calculate the natural numbers y up to a limit l such that n+y^2 is again a square number?

Using Python, I would like to implement a function that takes a natural number n as input and outputs a list of natural numbers [y1, y2, y3, ...] such that n + y1*y1, n + y2*y2, n + y3*y3, and so forth are each again a square.
What I tried so far is to obtain one y-value using the following function:
def find_square(n: int) -> tuple[int, int]:
    if n % 2 == 1:
        y = (n - 1) // 2
        x = n + y * y
        return (y, x)
    return None
It works fine; e.g. find_square(13689) gives me a correct solution, y=6844. It would be great to have an algorithm that yields all possible y-values, such as y=44 or y=156.
The simplest, slow approach is of course just to iterate over all possible Y for the given N and check whether N + Y^2 is a square.
But there is a much faster approach using an integer factorization technique:
Notice that to solve the equation N + Y^2 = X^2, that is, to find all integer pairs (X, Y) for a given fixed integer N, we can rewrite it as N = X^2 - Y^2 = (X + Y) * (X - Y), which follows from the well-known difference-of-squares formula.
Now rename the two factors as A, B, i.e. N = (X + Y) * (X - Y) = A * B, which means that X = (A + B) / 2 and Y = (A - B) / 2.
Notice that A and B must have the same parity, either both odd or both even; otherwise the divisions by 2 in the formulas above are not exact.
We will factorize N into all possible pairs of factors (A, B) of the same parity. For fast factorization the code below uses Pollard Rho, an algorithm that is simple to implement yet quite fast. Two helper algorithms are needed alongside it: the Fermat primality test (a fast check of whether a number is probably prime) and trial-division factorization (which factors out small factors that could otherwise cause Pollard Rho to fail).
Pollard Rho has time complexity O(N^(1/4)) for a composite number, which is very fast even for 64-bit numbers. Any faster factorization algorithm can be substituted if a bigger space needs to be searched. The running time of my fast algorithm is dominated by the factorization; the rest is blazingly fast, just a few loop iterations with simple formulas.
If your N is a square itself (so we know its root easily), then Pollard Rho can factor N even faster, within O(N^(1/8)) time. Even for 128-bit numbers that means very little work, about 2^16 operations, and I hope you're solving your task for numbers smaller than 128 bits.
If you want to process a whole range of possible N values, the fastest way to factorize them is a technique similar to the Sieve of Eratosthenes: using a set of prime numbers, it computes the factors of every N in the range at once. For a range of Ns this is much faster than factorizing each N separately with Pollard Rho; a sketch of the idea follows.
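As a sketch of that sieve idea (my addition, not the original answer's code): precompute each number's smallest prime factor once, then factor any N below the bound in O(log N) steps.
def build_spf(limit):
    # spf[n] = smallest prime factor of n, computed Eratosthenes-style
    spf = list(range(limit + 1))
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:                      # p is prime
            for multiple in range(p * p, limit + 1, p):
                if spf[multiple] == multiple:
                    spf[multiple] = p
    return spf

def factor_with_spf(n, spf):
    fs = []
    while n > 1:
        fs.append(spf[n])
        n //= spf[n]
    return fs

spf = build_spf(10 ** 6)
print(factor_with_spf(13689, spf))  # [3, 3, 3, 3, 13, 13]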
After factoring N into pairs (A, B), we compute (X, Y) from (A, B) by the formulas above and output each resulting Y as a solution of the fast algorithm; a small worked example follows.
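To make the (A, B) -> (X, Y) mapping concrete, here is a tiny brute-force sketch (my addition; the fast algorithm below gets the divisor pairs from the factorization instead of trial division):
import math

def y_values_brute(N):
    ys = set()
    for B in range(1, math.isqrt(N) + 1):
        if N % B == 0:
            A = N // B
            if (A - B) % 2 == 0:             # same parity, so X and Y are integers
                X, Y = (A + B) // 2, (A - B) // 2
                assert N + Y * Y == X * X
                ys.add(Y)
    return sorted(ys)

print(y_values_brute(13689))  # [0, 44, 156, 240, 520, 756, 2280, 6844]
Note that Y = 0 shows up because 13689 is itself a square (117^2).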
The following example code is implemented in pure Python. Of course one can use Numba to speed it up; Numba usually gives a 30-200x speedup, reaching roughly the speed of optimized C++. But I thought the main thing here was to implement the fast algorithm; Numba optimizations can be added easily afterwards.
I added time measurement to the code. Although it is pure Python, the fast algorithm still achieves a roughly 8500x speedup compared to the regular brute-force approach for a limit of 1,000,000.
You can change the limit variable to tweak the amount of searched space, or the num_tests variable to tweak the number of tests.
The code implements both solutions: the fast solution find_fast() described above, plus a very small brute-force solution find_slow(), which is very slow because it scans all possible candidates. The slow solution is used only to check correctness in the tests and to measure the speedup.
The code uses nothing except a few standard Python library modules; no external modules are needed.
Try it online!
def find_slow(N):
    import math
    def is_square(x):
        root = int(math.sqrt(float(x)) + 0.5)
        return root * root == x, root
    l = []
    for y in range(N):
        if is_square(N + y ** 2)[0]:
            l.append(y)
    return l

def find_fast(N):
    import itertools, functools
    Prod = lambda it: functools.reduce(lambda a, b: a * b, it, 1)
    fs = factor(N)
    mfs = {}
    for e in fs:
        mfs[e] = mfs.get(e, 0) + 1
    fs = sorted(mfs.items())
    del mfs
    Ys = set()
    for take_a in itertools.product(*[
            (range(v + 1) if k != 2 else range(1, v)) for k, v in fs]):
        A = Prod([p ** t for (p, _), t in zip(fs, take_a)])
        B = N // A
        assert A * B == N, (N, A, B, take_a)
        if A < B:
            continue
        X = (A + B) // 2
        Y = (A - B) // 2
        assert N + Y ** 2 == X ** 2, (N, A, B, X, Y)
        Ys.add(Y)
    return sorted(Ys)

def trial_div_factor(n, limit = None):
    # https://en.wikipedia.org/wiki/Trial_division
    fs = []
    while n & 1 == 0:
        fs.append(2)
        n >>= 1
    all_checked = False
    for d in range(3, (limit or n) + 1, 2):
        if d * d > n:
            all_checked = True
            break
        while True:
            q, r = divmod(n, d)
            if r != 0:
                break
            fs.append(d)
            n = q
    if n > 1 and all_checked:
        fs.append(n)
        n = 1
    return fs, n

def fermat_prp(n, trials = 32):
    # https://en.wikipedia.org/wiki/Fermat_primality_test
    import random
    if n <= 16:
        return n in (2, 3, 5, 7, 11, 13)
    for i in range(trials):
        if pow(random.randint(2, n - 2), n - 1, n) != 1:
            return False
    return True

def pollard_rho_factor(n):
    # https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm
    import math, random
    fs, n = trial_div_factor(n, 1 << 7)
    if n <= 1:
        return fs
    if fermat_prp(n):
        return sorted(fs + [n])
    for itry in range(8):
        failed = False
        x = random.randint(2, n - 2)
        for cycle in range(1, 1 << 60):
            y = x
            for i in range(1 << cycle):
                x = (x * x + 1) % n
                d = math.gcd(x - y, n)
                if d == 1:
                    continue
                if d == n:
                    failed = True
                    break
                return sorted(fs + pollard_rho_factor(d) + pollard_rho_factor(n // d))
            if failed:
                break
    assert False, f'Pollard Rho failed! n = {n}'

def factor(N):
    import functools
    Prod = lambda it: functools.reduce(lambda a, b: a * b, it, 1)
    fs = pollard_rho_factor(N)
    assert N == Prod(fs), (N, fs)
    return sorted(fs)

def test():
    import random, time
    limit = 1 << 20
    num_tests = 20
    t0, t1 = 0, 0
    for i in range(num_tests):
        if (round(i / num_tests * 1000)) % 100 == 0 or i + 1 >= num_tests:
            print(f'test {i}, ', end = '', flush = True)
        N = random.randrange(limit)
        tb = time.time()
        r0 = find_slow(N)
        t0 += time.time() - tb
        tb = time.time()
        r1 = find_fast(N)
        t1 += time.time() - tb
        assert r0 == r1, (N, r0, r1, t0, t1)
    print(f'\nTime slow {t0:.05f} sec, fast {t1:.05f} sec, speedup {round(t0 / max(1e-6, t1))} times')

if __name__ == '__main__':
    test()
Output:
test 0, test 2, test 4, test 6, test 8, test 10, test 12, test 14, test 16, test 18, test 19,
Time slow 26.28198 sec, fast 0.00301 sec, speedup 8732 times
For the easiest solution, you can try this:
import math

n = 13689  # or we can ask the user to input a square number
for i in range(1, 9999):
    if math.sqrt(n + i ** 2).is_integer():
        print(i)
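One caveat (my note, not the answerer's): math.sqrt works in floating point, so the .is_integer() test can misclassify very large values; a variant with math.isqrt (Python 3.8+) stays exact:
import math

n = 13689
for i in range(1, 9999):
    s = n + i ** 2
    r = math.isqrt(s)        # exact integer square root
    if r * r == s:
        print(i)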

Taylor series for log(x)

I'm trying to evaluate a Taylor polynomial for the natural logarithm, ln(x), centred at a = 1, in Python. I'm using the series given on Wikipedia; however, when I try a simple calculation like ln(2.7), instead of something close to 1 it gives me a gigantic number. Is there something obvious that I'm doing wrong?
def log(x):
    n = 1000
    s = 0
    for i in range(1, n):
        s += ((-1) ** (i + 1)) * ((x - 1) ** i) / i
    return s
Using the Taylor series ln(x) = sum over i = 1, 2, 3, ... of (-1)^(i+1) * (x-1)^i / i (shown as an image in the original post), the code gives a gigantic number instead of a value near 1 (the actual output was also shown as an image).
EDIT: If anyone stumbles across this, an alternative way to evaluate the natural logarithm of a real number is to use numerical integration (e.g. Riemann sums, the midpoint rule, the trapezoid rule, Simpson's rule, etc.) to evaluate the integral that is often used to define the natural logarithm: ln(x) = integral from 1 to x of dt/t.
That series only converges when 0 < x <= 2 (the terms involve powers of x - 1, and it diverges once |x - 1| > 1). For larger x you will need a different series.
For example this one (found here):
def ln(x): return 2*sum(((x-1)/(x+1))**i/i for i in range(1,100,2))
output:
ln(2.7) # 0.9932517730102833
math.log(2.7) # 0.9932517730102834
Note that it takes a lot more than 100 terms to converge as x gets bigger (up to a point where it'll become impractical)
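To see this concretely (my quick check), keep the term cutoff fixed and compare against math.log as x grows; the fixed-length sum loses accuracy rapidly:
import math
def ln100(x): return 2*sum(((x-1)/(x+1))**i/i for i in range(1,100,2))
for x in (2.7, 10.0, 100.0):
    print(x, abs(ln100(x) - math.log(x)))  # error grows by many orders of magnitude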
You can compensate for that by adding the logarithms of smaller factors of x:
def ln(x):
    if x > 2: return ln(x/2) + ln(2)  # ln(x) = ln(x/2 * 2) = ln(x/2) + ln(2)
    return 2*sum(((x-1)/(x+1))**i/i for i in range(1,1000,2))
which is something you can also do in your Taylor-based function to support x > 1:
def log(x):
    if x > 1: return log(x/2) - log(0.5)  # ln(2) = -ln(1/2)
    n = 1000
    s = 0
    for i in range(1, n):
        s += ((-1)**(i+1)) * ((x-1)**i) / i
    return s
These series also take more terms to converge when x gets closer to zero, so you may want to work them in the other direction as well, to keep the value actually computed between 0.5 and 1:
def log(x):
    if x > 1: return log(x/2) - log(0.5)    # ln(x/2 * 2) = ln(x/2) + ln(2)
    if x < 0.5: return log(2*x) + log(0.5)  # ln(x*2 / 2) = ln(x*2) - ln(2)
    ...
If performance is an issue, you'll want to store ln(2) or log(0.5) somewhere and reuse it instead of computing it on every call. For example:
ln2 = None
def ln(x):
    if x <= 2:
        return 2*sum(((x-1)/(x+1))**i/i for i in range(1,10000,2))
    global ln2
    if ln2 is None: ln2 = ln(2)
    n2 = 0
    while x > 2: x, n2 = x/2, n2 + 1
    return ln2*n2 + ln(x)
The program is correct, but the Mercator series has the following caveat:
The series converges to the natural logarithm (shifted by 1) whenever −1 < x ≤ 1.
For ln(2.7) the series variable is 2.7 - 1 = 1.7 > 1, so the series diverges and you shouldn't expect a result close to 1.
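You can see the divergence directly (my quick check); the partial sums for x - 1 = 1.7 explode instead of settling:
s = 0
for i in range(1, 1001):
    s += ((-1) ** (i + 1)) * (1.7 ** i) / i
print(s)  # astronomically large: the terms themselves grow without bound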
The Python function math.frexp(x) can be used to advantage here to transform the problem so that the Taylor series works with a value close to one. math.frexp(x) is described as:
Return the mantissa and exponent of x as the pair (m, e). m is a float
and e is an integer such that x == m * 2**e exactly. If x is zero,
returns (0.0, 0), otherwise 0.5 <= abs(m) < 1. This is used to “pick
apart” the internal representation of a float in a portable way.
Using math.frexp(x) should not be regarded as "cheating" because it is presumably implemented just by accessing the bit fields in the underlying binary floating point representation. It isn't absolutely guaranteed that the representation of floats will be IEEE 754 binary64, but as far as I know every platform uses this. sys.float_info can be examined to find out the actual representation details.
Much like the other answer, you can use the standard logarithmic identities as follows: let m, e = math.frexp(x). Then log(x) = log(m * 2^e) = log(m) + e * log(2). log(2) can be precomputed to full precision ahead of time and is just a constant in the program. Here is some code illustrating this, computing two similar Taylor series approximations to log(x). The number of terms in each series was determined by trial and error rather than rigorous analysis.
taylor1 implements log(1 + x) = x - (1/2) * x^2 + (1/3) * x^3 - ...
taylor2 implements log(x) = 2 * [t + (1/3) * t^3 + (1/5) * t^5 + ...], where t = (x - 1) / (x + 1).
import math

_LOG_OF_2 = 0.69314718055994530941723212145817656807550013436025

def taylor1(x):
    m, e = math.frexp(x)
    log_of_m = 0
    num_terms = 36
    sign = 1
    m_minus1_power = m - 1
    for k in range(1, num_terms + 1):
        log_of_m += sign * m_minus1_power / k
        sign = -sign
        m_minus1_power *= m - 1
    return log_of_m + e * _LOG_OF_2

def taylor2(x):
    m, e = math.frexp(x)
    num_terms = 12
    half_log_of_m = 0
    t = (m - 1) / (m + 1)
    t_squared = t * t
    t_power = t
    denominator = 1
    for k in range(num_terms):
        half_log_of_m += t_power / denominator
        denominator += 2
        t_power *= t_squared
    return 2 * half_log_of_m + e * _LOG_OF_2
This seems to work well over most of the domain of log(x), but as x approaches 1 (and log(x) approaches 0), the transformation provided by x = m * 2^e actually produces a less accurate result. So a better algorithm would first check whether x is close to 1, say abs(x - 1) < 0.5, and if so just compute the Taylor series approximation directly on x, as sketched below.
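A minimal sketch of that refinement (my addition; the function name is mine, and the 0.5 threshold is just the heuristic suggested above), reusing taylor2 from the code above:
def log_near_one_aware(x):
    # Near 1, skip the frexp reduction and run the same series directly on x
    if abs(x - 1) < 0.5:
        t = (x - 1) / (x + 1)
        t_squared = t * t
        t_power = t
        half_log = 0.0
        denominator = 1
        for k in range(12):
            half_log += t_power / denominator
            denominator += 2
            t_power *= t_squared
        return 2 * half_log
    return taylor2(x)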
My answer just uses the Taylor series for ln(x) (posted as an image in the original answer, not reproduced here). I really hope this helps. It is simple and straight to the point.

Avoid using for loops in np.array operations - python

I have these two arrays:
import numpy as np
a = np.array([0, 10, 20])
b = np.array([20, 30, 40, 50])
I'd like to add both in the following way:
for i in range(len(a)):
    for j in range(len(b)):
        c = a[i] + b[j]
        d = delta(c, dr)
As you see, for each iteration I get a value c, which I pass through a function delta (see the note at the end of the post).
The thing is that I want to avoid slow Python for loops when the arrays are huge.
One thing I could do would be:
c = np.ravel(a.reshape(-1, 1) + b)
This is much, much faster. The problem is that now c is an array, and again I would have to iterate over it with a for loop.
So, do you have any idea how I could do this without using a for loop at all?
NOTE: delta is a function I define in the following way:
def delta(r, dr):
    if r >= 0.5*dr and r <= 1.5*dr:
        delta = (5 - 3*abs(r)/dr - np.sqrt(-3*(1 - abs(r)/dr)**2 + 1)) / (6*dr)
    elif r <= 0.5*dr:
        delta = (1 + np.sqrt(-3*(r/dr)**2 + 1)) / (3*dr)
    else:
        delta = 0
    return delta
Using ravel is a good idea. Note that you could also use simple array broadcasting (a[:, np.newaxis] + b[np.newaxis, :]).
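For example (my quick illustration), with the arrays from the question, broadcasting produces the whole grid of pairwise sums in one step:
import numpy as np

a = np.array([0, 10, 20])
b = np.array([20, 30, 40, 50])
c = a[:, np.newaxis] + b[np.newaxis, :]  # shape (3, 4): c[i, j] == a[i] + b[j]
print(c)
# [[20 30 40 50]
#  [30 40 50 60]
#  [40 50 60 70]]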
For your function, you can improve this a lot because it is composed of only three particular cases. Probably the best approach is to use masking for each of those three sections.
You're starting with:
def delta(r, dr):
    if r >= 0.5*dr and r <= 1.5*dr:
        delta = (5 - 3*abs(r)/dr - np.sqrt(-3*(1 - abs(r)/dr)**2 + 1)) / (6*dr)
    elif r <= 0.5*dr:
        delta = (1 + np.sqrt(-3*(r/dr)**2 + 1)) / (3*dr)
    else:
        delta = 0
A common alternative approach would be something like:
def delta(r, dr):
    # Note: this indexes dr with the mask, so here dr is assumed to be an
    # array broadcastable like r (the next answer handles a scalar dr)
    res = np.zeros_like(r)
    ma = (r >= 0.5*dr) & (r <= 1.5*dr)  # Create first mask
    res[ma] = (5 - 3*np.abs(r[ma])/dr[ma]
               - np.sqrt(-3*(1 - np.abs(r[ma])/dr[ma])**2 + 1)) / (6*dr[ma])
    ma = (r <= 0.5*dr)                  # Create second mask
    res[ma] = (1 + np.sqrt(-3*(r[ma]/dr[ma])**2 + 1)) / (3*dr[ma])
    return res
Initializing to zeros handles the final else case. Also, I'm assuming np.abs is faster than abs, but I'm not actually sure...
Edit: for sparse matrices
The same basic idea should apply, but perhaps instead of using a boolean masking array, using the valid indices themselves would be better... e.g. something like:
res = scipy.sparse.coo_matrix(np.shape(r))
ma = np.where((r >= 0.5*dr) & (r <= 1.5*dr))  # Create first mask
res[ma] = ...
This is the same answer as DilithiumMatrix, but using logical functions that numpy accepts to generate the masks.
import numpy as np
def delta(r, dr):
    res = np.zeros(r.shape)
    mask1 = (r >= 0.5*dr) & (r <= 1.5*dr)
    res[mask1] = (5 - 3*np.abs(r[mask1])/dr
                  - np.sqrt(-3*(1 - np.abs(r[mask1])/dr)**2 + 1)) / (6*dr)
    mask2 = np.logical_not(mask1) & (r <= 0.5*dr)
    res[mask2] = (1 + np.sqrt(-3*(r[mask2]/dr)**2 + 1)) / (3*dr)
    return res
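For completeness (my addition), here is how this vectorized delta combines with the broadcasting step, assuming dr is a scalar as in the question (dr = 25.0 is just an illustrative value):
a = np.array([0, 10, 20])
b = np.array([20, 30, 40, 50])
dr = 25.0                        # hypothetical scalar spacing
r = a[:, np.newaxis] + b         # all pairwise sums, shape (3, 4)
print(delta(r, dr))              # elementwise result, no Python-level loops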
Assuming your two arrays (a and b) are not enormous, you could do something like this:
import itertools
import numpy

a = numpy.array([1, 2, 3])
b = numpy.array([4, 5, 6])
c = numpy.sum(list(itertools.product(a, b)), 1)

def func(x, y):
    return x * y

numpy.vectorize(func)(c, 10)
Note that with large arrays this simply won't work: you'll have n^2 elements in c, which means that even for smallish-seeming pairs of arrays you'll use enormous amounts of memory. For two arrays with 100,000 elements each, c has 10^10 entries, which at 8 bytes per value puts the total memory required in the range of 74 GiB.

Finding index where one sequence is greater than another

I have two numpy arrays and I'd like to find the index at which the data in one array becomes greater than the other. The following code seems to do the trick, but I'm wondering if there is a better way:
# For example
import numpy as np

x = np.arange(-10, 10, 0.01)
f1 = x + 3.0
f2 = 2 * x
cross_over_index = np.nonzero(f2 > f1)[0][0]
Because your question is only meaningful if the arrays are ordered, you can find the crossover point with a binary search, which will be much faster for large arrays:
# Find the first index at which a2 > a1
def crossover(a1, a2):
    bottom = 0
    top = min(len(a1) - 1, len(a2) - 1)
    if a2[top] <= a1[top]:
        return -1
    while top > bottom:
        mid = (top + bottom) >> 1
        if a2[mid] > a1[mid]:
            top = mid
        else:
            bottom = mid + 1
    return top

f1 = [(x + 20) for x in range(80)]
f2 = [(2 * x) for x in range(100)]
print(crossover(f1, f2))
This should (and does) print "21".
If you're looking for the solution of the equation f1(x) = f2(x), you could also do this:
np.argmin(abs(f2-f1))
# 1300
and to get x there, of course
x[np.argmin(abs(f2-f1))]
# 3.0
But note that this will only sometimes give the same answer as what you technically asked for; the original code in this example returns i0 = 1301 and x0 = 3.01, which is the solution plus one step of x.
