Find 3 of the lowest common multiples of 3 numbers - python

def mult_comun(x, y, z):
    mult_co = []
    mult = mult_com(x, y, z)
    for i in range(1, 4):
        mult *= i
        mult_co.append(mult)
    return mult_co

print(mult_comun(a, b, c))
This is the code I wrote, but I'm not sure it's working right (I don't think the math works like this in this case; I mean, multiplying the least common multiple by 2, 3 and 4).
(this is the mult_com() function I defined earlier, used for finding the least common multiple)
def mult_com(x, y, z):
    mult = (x * y * z) // (div_com(x, y, z) ** 2)
    return mult

print("The least common multiple is ", mult_com(a, b, c))

OK, I found the math.lcm() function, so mult_com() is now edited to:
import math

def mult_com(x, y, z):
    mult = math.lcm(math.lcm(x, y), z)
    return mult
Again, I am not saying that mult_comun() is not working; it's just that I am not sure that this is the best / correct way of finding what I'm looking for.

There is a neat trick for finding the least common multiple (LCM). In mathematics,
LCM(x, y) * HCF(x, y) = x * y
We can use this identity to find the LCM.
# Function to find the HCF using the Euclidean algorithm
def compute_hcf(x, y):
    while y:
        x, y = y, x % y
    return x
Now, taking the product of the two numbers and dividing it by their HCF gives you the LCM:
def compute_lcm(a, b):
    return a * b // compute_hcf(a, b)
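To tie this back to the original three-number question, here is a minimal sketch (the function name and example values are mine, and it assumes the goal is the LCM together with its next two multiples). On Python 3.9+ math.lcm accepts any number of arguments; on older versions you can chain the two-argument compute_lcm above, e.g. compute_lcm(compute_lcm(x, y), z).
import math

def first_three_common_multiples(x, y, z):
    lcm = math.lcm(x, y, z)                   # least common multiple of all three numbers
    return [lcm * i for i in range(1, 4)]     # lcm, 2*lcm, 3*lcm

print(first_three_common_multiples(4, 6, 10))  # hypothetical values -> [60, 120, 180]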


Python round() to remove trailing zeros

The code below is used to find the minimiser x* using gradient descent. The problem here is that the final result of req8 is -1.00053169969469, and when I round it, the result is -1.00000000000000.
How can I round to get -1.0 or -1.00 instead, without using any other module?
from sympy import *
import numpy as np

global x, y, z, t
x, y, z, t = symbols("x, y, z, t")

def req8(f, eta, xi, tol):
    dx = diff(f(x), x)
    arrs = [xi]
    for i in range(100):
        x_star = arrs[-1] - eta * round(dx.subs(x, arrs[-1]), 7)
        if abs(dx.subs(x, x_star)) < tol:
            break
        arrs.append(x_star)
    print(arrs[-1])
    print(round(arrs[-1], 2))

def f_22(x):
    return x**2 + 2*x - 1

req8(f_22, 0.1, -5, 1e-3)
Use the format function to print your rounded result with the desired number of digits (2, in the example below):
print("{:.2f}".format(round(arrs[-1], 2)))
EDIT: as pointed out by @SultanOrazbayev, rounding is no longer necessary here and you may print your result using the following expression:
print("{:.2f}".format(arrs[-1]))

How to minimise a sum of two functions allowing one of the arguments to vary over both functions?

I have two functions f(x,y,z) and g(x,y,z). I want to minimise the sum
h(x,y,z) = f(x,y,z) + g(x,y,z), allowing x to be variable over both functions f and g.
I can minimise both these functions separately or together using scipy.optimize.minimize, which basically calculates the values of f + g (or h) at a bunch of (x, y, z) values and then returns the values (x, y, z) for which f + g is minimal.
What happens here is that both f and g are evaluated at the same values of (x, y, z), but I want one of the arguments (say x) to vary over f and g.
This is a rough outline of what I am trying:
import scipy.optimize as op

def f(x, y, z):
    return scalar

def g(x, y, z):
    return another_scalar

def h(theta):
    x, y, z = theta
    return f(x, y, z) + g(x, y, z)

def bestfit(guess, method='Nelder-Mead'):
    result = op.minimize(h,
                         guess,
                         method=method,
                         options={'maxfev': 5000,
                                  'disp': False})
    if not result.success:
        print('Optimisation did not converge.')
    return result

g = [x0, y0, z0]
bf = bestfit(g, method='Nelder-Mead')
print(bf)
I am not sure if I can do it using scipy.optimise. Can I? Or is there some other python module I can use?
My first thought would be to define new functions, say a and b, with fixed values of y and z, such that your new functions are a(x) = f(x, y0, z0) and b(x) = g(x, y0, z0) and then minimize these functions.
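A minimal sketch of that idea, assuming toy stand-ins for f and g and made-up fixed values y0 and z0 (scipy.optimize.minimize_scalar is used here, but op.minimize with a one-element guess would work just as well):
import scipy.optimize as op

y0, z0 = 1.0, 2.0   # assumed fixed values

def f(x, y, z):
    return (x - 1.0)**2 + y**2 + z**2      # toy stand-in for the real f

def g(x, y, z):
    return (x + 2.0)**2 + y * z            # toy stand-in for the real g

def a(x):
    return f(x, y0, z0)

def b(x):
    return g(x, y0, z0)

res_a = op.minimize_scalar(a)   # minimise f over x with y, z held fixed
res_b = op.minimize_scalar(b)   # minimise g over x with y, z held fixed
print(res_a.x, res_b.x)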

Why does decreasing the range of np.linspace increase the accuracy of numerical integration?

I am reading a tutorial on simple numerical integration (https://helloacm.com/how-to-compute-numerical-integration-in-numpy-python/), which seems to suggest that decreasing the range of the x values used in your function returns a more accurate numerical answer. The code they use is:
import numpy as np

def integrate(f, a, b, N):
    x = np.linspace(a, b, N)
    fx = f(x)
    area = np.sum(fx) * (b - a) / N
    return area

integrate(np.sin, 0, np.pi/2, 100)
This returns a value of 0.99783321217729803.
However when they modify the integration method to:
def integrate(f, a, b, N):
    x = np.linspace(a + (b-a)/(2*N), b - (b-a)/(2*N), N)
    fx = f(x)
    area = np.sum(fx) * (b - a) / N
    return area

integrate(np.sin, 0, np.pi/2, 100)
This returns a more accurate value of 1.0000102809119051. Why is this the case?
Two things:
1. The step width in your first integrate is not (b-a) / N, but (b-a) / (N-1).
2. In your first method, the error is dominated by the half-bar overshoot on the left and right, i.e., the terms (b-a)/(N-1)/2 * f(a) and (b-a)/(N-1)/2 * f(b). If you subtract those two, you get an accuracy comparable to your second method.
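A minimal sketch of that correction (the function name is mine; subtracting the two half-bar terms turns the first method into the trapezoidal rule):
import numpy as np

def integrate_corrected(f, a, b, N):
    x = np.linspace(a, b, N)
    fx = f(x)
    h = (b - a) / (N - 1)                            # actual spacing of np.linspace(a, b, N)
    area = np.sum(fx) * h - h / 2 * (f(a) + f(b))    # remove the half-bar overshoot at both ends
    return area

print(integrate_corrected(np.sin, 0, np.pi/2, 100))  # close to 1, comparable to the midpoint version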

Numerical and analytical equation solving in python or matlab

Let's suppose that I have two transcendental functions f(x, y) = 0 and g(a, b) = 0.
a and b depend on y, so if I could solve the first equation analytically for y, obtaining y as a function of x, I could make the second function depend only on x and then solve it numerically.
I prefer to use Python, but if MATLAB can handle this, that's OK for me.
Is there a way to solve transcendental equations analytically for a variable with Python/MATLAB? A Taylor approximation is fine too, as long as I can choose the order of approximation.
I tried running this through Sympy like so:
import sympy
j, k, m, x, y = sympy.symbols("j k m x y")
eq = sympy.Eq(k * sympy.tan(y) + j * sympy.tan(sympy.asin(sympy.sin(y) / x)), m)
eq.simplify()
which turned your equation into
Eq(m, j*sin(y)/(x*sqrt(1 - sin(y)**2/x**2)) + k*tan(y))
which, after a bit more poking, gives us
k * tan(y) + j * sin(y) / sqrt(x**2 - sin(y)**2) == m
We can find an expression for x(y) like
sympy.solve(eq, x)
which returns
[-sqrt(j**2*sin(y)**2/(k*tan(y) - m)**2 + sin(y)**2),
sqrt(j**2*sin(y)**2/(k*tan(y) - m)**2 + sin(y)**2)]
but an analytic solution for y(x) fails.
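Since the analytic route for y(x) dead-ends, one numeric fallback (a sketch only; the parameter values and starting guess below are made up) is to substitute concrete values and use sympy.nsolve for each x of interest:
import sympy

j, k, m, x, y = sympy.symbols("j k m x y")
eq = sympy.Eq(k * sympy.tan(y) + j * sympy.tan(sympy.asin(sympy.sin(y) / x)), m)

# Illustrative values only; substitute your own j, k, m and the x you care about.
numeric_eq = eq.subs({j: 1, k: 2, m: 3, x: 1.5})
y_val = sympy.nsolve(numeric_eq, y, 0.9)   # 0.9 is just a starting guess
print(y_val)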

Solving non-linear equations in python

I have 4 non-linear equations with three unknowns X, Y, and Z that I want to solve for. The equations are of the form:
F(m) = X^2 + a(m)Y^2 + b(m)XYcosZ + c(m)XYsinZ
...where a, b and c are constants which are dependent on each value of F in the four equations.
What is the best way to go about solving this?
There are two ways to do this:
1. Use a non-linear solver
2. Linearize the problem and solve it in the least-squares sense
Setup
So, as I understand your question, you know F, a, b, and c at 4 different points, and you want to invert for the model parameters X, Y, and Z. We have 3 unknowns and 4 observed data points, so the problem is overdetermined. Therefore, we'll be solving in the least-squares sense.
It's more common to use the opposite terminology in this case, so let's flip your equation around. Instead of:
F_i = X^2 + a_i Y^2 + b_i X Y cosZ + c_i X Y sinZ
Let's write:
F_i = a^2 + X_i b^2 + Y_i a b cos(c) + Z_i a b sin(c)
Where we know F, X, Y, and Z at 4 different points (e.g. F_0, F_1, ... F_i).
We're just changing the names of the variables, not the equation itself. (This is more for my ease of thinking than anything else.)
Linear Solution
It's actually possible to linearize this equation. You can easily solve for a^2, b^2, a b cos(c), and a b sin(c). To make this a bit easier, let's relabel things yet again:
d = a^2
e = b^2
f = a b cos(c)
g = a b sin(c)
Now the equation is a lot simpler: F_i = d + e X_i + f Y_i + g Z_i. It's easy to do a least-squares linear inversion for d, e, f, and g. We can then get a, b, and c from:
a = sqrt(d)
b = sqrt(e)
c = arctan(g/f)
Okay, let's write this up in matrix form. We're going to translate 4 observations of (the code we'll write will take any number of observations, but let's keep it concrete at the moment):
F_i = d + e X_i + f Y_i + g Z_i
Into:
|F_0|   |1, X_0, Y_0, Z_0|   |d|
|F_1| = |1, X_1, Y_1, Z_1| * |e|
|F_2|   |1, X_2, Y_2, Z_2|   |f|
|F_3|   |1, X_3, Y_3, Z_3|   |g|
Or: F = G * m (I'm a geophysicist, so we use G for "Green's Functions" and m for "Model Parameters". Usually we'd use d for "data" instead of F, as well.)
In python, this would translate to:
def invert(f, x, y, z):
    G = np.vstack([np.ones_like(x), x, y, z]).T
    m, _, _, _ = np.linalg.lstsq(G, f)
    d, e, f, g = m

    a = np.sqrt(d)
    b = np.sqrt(e)
    c = np.arctan2(g, f)  # Note that `c` will be in radians, not degrees
    return a, b, c
Non-linear Solution
You could also solve this using scipy.optimize, as @Joe suggested. The most accessible function in scipy.optimize is scipy.optimize.curve_fit, which uses a Levenberg-Marquardt method by default.
Levenberg-Marquardt is a "hill climbing" algorithm (well, it goes downhill, in this case, but the term is used anyway). In a sense, you make an initial guess of the model parameters (all ones, by default in scipy.optimize) and follow the slope of observed - predicted in your parameter space downhill to the bottom.
Caveat: Picking the right non-linear inversion method, initial guess, and tuning the parameters of the method is very much a "dark art". You only learn it by doing it, and there are a lot of situations where things won't work properly. Levenberg-Marquardt is a good general method if your parameter space is fairly smooth (this one should be). There are a lot of others (including genetic algorithms, neural nets, etc in addition to more common methods like simulated annealing) that are better in other situations. I'm not going to delve into that part here.
There is one common gotcha that some optimization toolkits try to correct for that scipy.optimize doesn't try to handle. If your model parameters have different magnitudes (e.g. a=1, b=1000, c=1e-8), you'll need to rescale things so that they're similar in magnitude. Otherwise scipy.optimize's "hill climbing" algorithms (like LM) won't accurately estimate the local gradient, and will give wildly inaccurate results. For now, I'm assuming that a, b, and c have relatively similar magnitudes. Also, be aware that essentially all non-linear methods require you to make an initial guess, and are sensitive to that guess. I'm leaving it out below (just pass it in as the p0 kwarg to curve_fit) because the default a, b, c = 1, 1, 1 is a fairly accurate guess for a, b, c = 3, 2, 1.
With the caveats out of the way, curve_fit expects to be passed a function, a set of points where the observations were made (as a single ndim x npoints array), and the observed values.
So, if we write the function like this:
def func(x, y, z, a, b, c):
    f = (a**2
         + x * b**2
         + y * a * b * np.cos(c)
         + z * a * b * np.sin(c))
    return f
We'll need to wrap it to accept slightly different arguments before passing it to curve_fit.
In a nutshell:
def nonlinear_invert(f, x, y, z):
    def wrapped_func(observation_points, a, b, c):
        x, y, z = observation_points
        return func(x, y, z, a, b, c)

    xdata = np.vstack([x, y, z])
    model, cov = opt.curve_fit(wrapped_func, xdata, f)
    return model
Stand-alone Example of the two methods:
To give you a full implementation, here's an example that:
- generates randomly distributed points to evaluate the function on,
- evaluates the function on those points (using set model parameters),
- adds noise to the results,
- and then inverts for the model parameters using both the linear and non-linear methods described above.
import numpy as np
import scipy.optimize as opt

def main():
    nobservations = 4
    a, b, c = 3.0, 2.0, 1.0
    f, x, y, z = generate_data(nobservations, a, b, c)

    print('Linear results (should be {}, {}, {}):'.format(a, b, c))
    print(linear_invert(f, x, y, z))

    print('Non-linear results (should be {}, {}, {}):'.format(a, b, c))
    print(nonlinear_invert(f, x, y, z))

def generate_data(nobservations, a, b, c, noise_level=0.01):
    x, y, z = np.random.random((3, nobservations))
    noise = noise_level * np.random.normal(0, noise_level, nobservations)
    f = func(x, y, z, a, b, c) + noise
    return f, x, y, z

def func(x, y, z, a, b, c):
    f = (a**2
         + x * b**2
         + y * a * b * np.cos(c)
         + z * a * b * np.sin(c))
    return f

def linear_invert(f, x, y, z):
    G = np.vstack([np.ones_like(x), x, y, z]).T
    m, _, _, _ = np.linalg.lstsq(G, f)
    d, e, f, g = m

    a = np.sqrt(d)
    b = np.sqrt(e)
    c = np.arctan2(g, f)  # Note that `c` will be in radians, not degrees
    return a, b, c

def nonlinear_invert(f, x, y, z):
    # "curve_fit" expects the function to take a slightly different form...
    def wrapped_func(observation_points, a, b, c):
        x, y, z = observation_points
        return func(x, y, z, a, b, c)

    xdata = np.vstack([x, y, z])
    model, cov = opt.curve_fit(wrapped_func, xdata, f)
    return model

main()
You probably want to be using scipy's nonlinear solvers; they're really easy: http://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html
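That page covers SciPy's large-scale root finders; since this particular system is overdetermined (4 equations, 3 unknowns), scipy.optimize.least_squares is arguably the more direct fit. A minimal sketch with made-up a_i, b_i, c_i, F_i values:
import numpy as np
from scipy.optimize import least_squares

# Made-up observations purely for illustration.
a_i = np.array([1.0, 2.0, 0.5, 1.5])
b_i = np.array([0.3, 0.7, 1.1, 0.9])
c_i = np.array([0.2, 0.4, 0.6, 0.8])
F_i = np.array([10.0, 14.0, 9.0, 12.0])

def residuals(p):
    X, Y, Z = p
    model = X**2 + a_i * Y**2 + b_i * X * Y * np.cos(Z) + c_i * X * Y * np.sin(Z)
    return model - F_i

result = least_squares(residuals, x0=[1.0, 1.0, 0.1])   # the initial guess is a made-up starting point
print(result.x)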
