Python - Linear to Logarithmic Scale Conversion

Is there a way to convert number ranges?
I need to convert a linear range (0-1) to a logarithmic one (100*10^-12 - 1) so I can put a moveable horizontal line on a plotly plot (https://plotly.com/python/horizontal-vertical-shapes/#horizontal-and-vertical-lines-in-dash).
As far as I’m aware I can’t make my slider logarithmic to begin with (https://dash.plotly.com/dash-core-components/slider#non-linear-slider-and-updatemode).
I’ve tried normalizing. I’m not sure if that’s the right word, but basically putting my value into:
f(x) = log10(x * (max-min) + min)
Where:
x is the linear value being converted
max is the max of the log scale (1)
min is the min of the log scale (100*10^-12)
But f(.2) = .447 when I’m expecting 10*10^-9.
Is there a way to accomplish this (or a better way to put a moveable horizontal line on the plot)?

BTW, 100*10^-12 == 10^-10.
It seems you want to take the logarithm of values in the 10^-10..1 range to map them into the 0..1 range, and vice versa?
Y = A * log10(B * X)
substituting end values:
0 = A * log10(B * 10^-10) = A * (log10(B) - 10)
log10(B) = 10
B = 10^10
1 = A * log10(10^10 * 1) = A * 10
A = 0.1
So formula is
Y = 0.1 * log10(10^10 * X) =
1 + 0.1 * log10(X)
Reverse formula
10*Y = log10(10^10 * X)
10^(10*Y) = 10^10 * X
X = 10^(10*Y) * 10^-10 =
10^(10*Y-10)
using your example Y=0.2, we get X = 10^-8 as expected
from math import log10

for i in range(-10, 1):
    X = 10**i
    Y = 1 + 0.1 * log10(X)
    print(Y)

print()

for i in range(0, 11):
    Y = i / 10
    X = 10**(10*Y - 10)
    print(X)
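For the original Dash use case, here is a minimal sketch that puts these formulas behind a 0-1 slider and uses them to drive a horizontal line on a log-scale plot (assuming Dash 2.x; the component ids, example data and layout below are made up for illustration):

import numpy as np
import plotly.graph_objects as go
from dash import Dash, dcc, html, Input, Output

def slider_to_log(y):
    # map the linear slider value in [0, 1] to the log range [1e-10, 1]
    return 10**(10*y - 10)

app = Dash(__name__)
app.layout = html.Div([
    dcc.Graph(id='graph'),
    dcc.Slider(id='threshold', min=0, max=1, step=0.01, value=0.5),
])

@app.callback(Output('graph', 'figure'), Input('threshold', 'value'))
def update(slider_value):
    # placeholder data spanning the 1e-10..1 range
    fig = go.Figure(go.Scatter(x=np.arange(11), y=np.logspace(-10, 0, 11)))
    fig.update_yaxes(type='log')
    fig.add_hline(y=slider_to_log(slider_value))  # the moveable horizontal line
    return fig

if __name__ == '__main__':
    app.run(debug=True)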

Related

How can I go about finding points where there is a bend/cut in my data?

I have the following data points: There are 5 sublists in this list of data. What I am trying to do is find the points where there is a maximum amount of curvature.
for i in range(len(smallest_5)):
    x = [x for x, y in smallest_5[i]]
    y = [y for x, y in smallest_5[i]]
    plt.scatter(x, y)
    plt.savefig('bend' + str(count) + '.png')
    plt.show()
I've used this code to plot the points.
sub_curvature = []
for i in range(len(smallest_5)):
    a = np.array(smallest_5[i])
    dx_dt = np.gradient(a[:, 0])
    dy_dt = np.gradient(a[:, 1])
    velocity = np.array([[dx_dt[i], dy_dt[i]] for i in range(dx_dt.size)])
    ds_dt = np.sqrt(dx_dt * dx_dt + dy_dt * dy_dt)
    tangent = np.array([1 / ds_dt] * 2).transpose() * velocity
    tangent_x = tangent[:, 0]
    tangent_y = tangent[:, 1]
    deriv_tangent_x = np.gradient(tangent_x)
    deriv_tangent_y = np.gradient(tangent_y)
    dT_dt = np.array([[deriv_tangent_x[i], deriv_tangent_y[i]] for i in range(deriv_tangent_x.size)])
    length_dT_dt = np.sqrt(deriv_tangent_x * deriv_tangent_x + deriv_tangent_y * deriv_tangent_y)
    normal = np.array([1 / length_dT_dt] * 2).transpose() * dT_dt
    d2s_dt2 = np.gradient(ds_dt)
    d2x_dt2 = np.gradient(dx_dt)
    d2y_dt2 = np.gradient(dy_dt)
    curvature = np.abs(d2x_dt2 * dy_dt - dx_dt * d2y_dt2) / (dx_dt * dx_dt + dy_dt * dy_dt)**1.5
    t_component = np.array([d2s_dt2] * 2).transpose()
    n_component = np.array([curvature * ds_dt * ds_dt] * 2).transpose()
    acceleration = t_component * tangent + n_component * normal
    sub_curvature.append(curvature)
I used the code above to calculate the curvature of individual points on the data.
Above are some of the graphs I created using the data. As you can see, the first one has no real bend but the last two have a point where there is a large bend. How could I go about identifying this region? Is it correct to calculate the curvature for individual points or should I look at the curvature over a sliding window of points? Thank you!
If we assume "curvature" to mean circular curvature, then you'll need a sliding window over 3 points (since 3 points determine a circle).
For any three points (a, b, c) the curvature is 2 * |(a-b) x (b-c)| / (|a-b| * |b-c| * |a-c|).
We can get a-b and b-c (assuming the points of one curve are in an (N, 2) NumPy array, e.g. pts = np.asarray(smallest_5[i])) from
ab = pts[1:] - pts[:-1]
and a-c from:
ac = pts[2:] - pts[:-2]
Then the squared curvature of each three-point window is:
curv_sq = 4 * np.cross(ab[1:], ab[:-1])**2 / ((ab[1:]**2).sum(axis=-1) * (ab[:-1]**2).sum(axis=-1) * (ac**2).sum(axis=-1))
Since we're just looking for the maximum curvature, we don't actually have to take the square root of that. We can find the window with maximum curvature with
max_curv_index = np.argmax(curv_sq)  # curv_sq[k] belongs to the middle point pts[k + 1]
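For illustration, here is a minimal end-to-end sketch of that approach, under the assumption that each entry of smallest_5 is a list of (x, y) pairs (the helper name and the kinked test curve below are made up):

import numpy as np

def max_bend_index(points):
    # points: (N, 2) array of (x, y) samples along one curve
    pts = np.asarray(points, dtype=float)
    ab = pts[1:] - pts[:-1]
    ac = pts[2:] - pts[:-2]
    curv_sq = 4 * np.cross(ab[1:], ab[:-1])**2 / (
        (ab[1:]**2).sum(axis=-1) * (ab[:-1]**2).sum(axis=-1) * (ac**2).sum(axis=-1))
    return np.argmax(curv_sq) + 1  # +1: curv_sq[k] belongs to the middle point pts[k + 1]

# toy curve with a sharp bend at x = 0
xs = np.linspace(-5, 5, 101)
ys = np.abs(xs)
print(max_bend_index(np.column_stack([xs, ys])))  # prints 50, the index of the kink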
As an idea, you can find the minimum y which is not the first or the last value in the y-dimension of the array. For example:
s4 = np.array(smallest_5[4]).T  # extract a sub-array
min_y = np.argmin(s4[1])  # gives 13
min_y in (0, len(s4[1]) - 1)  # gives False, so the minimum is in the middle of the curve
s0 = np.array(smallest_5[0]).T  # extract a sub-array
min_y = np.argmin(s0[1])  # gives 16
min_y in (0, len(s0[1]) - 1)  # gives True, so the minimum is not in the middle of the curve

How can I use very small numbers in Python with scipy?

I am trying to simulate a physics situation that involves calculating very small numbers. When the numbers get too small the values become garbage and/or are rounded to zero which doesn't help me. I am also using the scipy constants module for certain constants. I am trying to calculate the position using Euler's Method and calculating the velocity using momentum. The physics isn't the important part in this problem.
I have tried using the decimal module but think I am running into problems when using decimal and scipy constants together. Also, when using Decimal, do I need to convert every variable into Decimal before calculating?
In the loop below, it can only compute about 3 values before the error occurs.
# Create the arrays for velocity and position
vx = sp.zeros(n+1)
vy = sp.zeros(n+1)
x = sp.zeros(n+1)
y = sp.zeros(n+1)
time = sp.zeros(n+1)
# Initialize our values
vx[0] = vx0
vy[0] = vy0
x[0] = x0
y[0] = y0
time[0] = 0
i = 0
while y[i] > 0:
    step = math.sqrt(x[i]**2 + y[i]**2) / cs.c
    vx[i + 1] = vx[i] + (((cs.hbar * cs.c) / (2*cs.electron_mass)) * (x[i] / (x[i]**2 + y[i]**2)))
    vy[i + 1] = vy[i] + (((cs.hbar * cs.c) / (2*cs.electron_mass)) * (y[i] / (x[i]**2 + y[i]**2)))
    x[i + 1] = x[i] + (vx[i] * step)
    y[i + 1] = y[i] + (vy[i] * step)
    i += 1
RuntimeWarning: invalid value encountered in double_scalars
First and foremost, I'd start with using natural units, with hbar = c = m = 1. Then re-evaluate whether the underflow persists or not.
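As a rough sketch of what that rescaling can look like (an assumption-heavy illustration: positions measured in units of the reduced Compton wavelength hbar/(m*c) and velocities in units of c, so hbar = c = m = 1 and the coefficient hbar*c/(2m) becomes 1/2; the initial conditions and step count are placeholders):

import numpy as np

n = 20
vx = np.zeros(n + 1)
vy = np.zeros(n + 1)
x = np.zeros(n + 1)
y = np.zeros(n + 1)

# placeholder initial conditions, order 1 in natural units
x[0], y[0] = 1.0, 1.0
vx[0], vy[0] = 0.0, 0.1

i = 0
while y[i] > 0 and i < n:
    r2 = x[i]**2 + y[i]**2
    step = np.sqrt(r2)                  # was sqrt(x**2 + y**2) / c; c = 1 here
    vx[i + 1] = vx[i] + 0.5 * x[i] / r2  # hbar*c/(2m) -> 1/2 in natural units
    vy[i + 1] = vy[i] + 0.5 * y[i] / r2
    x[i + 1] = x[i] + vx[i] * step
    y[i + 1] = y[i] + vy[i] * step
    i += 1

print(i, x[i], y[i])  # no tiny hbar/m factors left to underflow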

Generating random numbers a, b, c such that a^2 + b^2 + c^2 = 1

To do some simulations in Python, I'm trying to generate numbers a,b,c such that a^2 + b^2 + c^2 = 1. I think generating some a between 0 and 1, then some b between 0 and sqrt(1 - a^2), and then c = sqrt(1 - a^2 - b^2) would work.
Floating point values are fine, the sum of squares should be close to 1. I want to keep generating them for some iterations.
Being new to Python, I'm not really sure how to do this. Negatives are allowed.
Edit: Thanks a lot for the answers!
According to this answer at stats.stackexchange.com, you should use normally distributed values to get uniformly distributed values on a sphere. That would mean, you could do:
import numpy as np
abc = np.random.normal(size=3)
a,b,c = abc/np.sqrt(sum(abc**2))
Just in case you're interested in the probability densities, I decided to do a comparison between the different approaches:
import numpy as np
import random
import math
def MSeifert():
    a = 1
    b = 1
    while a**2 + b**2 > 1:  # discard any a and b whose sum of squares already exceeds 1
        a = random.random()
        b = random.random()
    c = math.sqrt(1 - a**2 - b**2)  # fixed c
    return a, b, c

def VBB():
    x = np.random.uniform(0, 1, 3)  # random numbers in [0, 1)
    x /= np.sqrt(x[0] ** 2 + x[1] ** 2 + x[2] ** 2)
    return x[0], x[1], x[2]

def user3684792():
    theta = random.uniform(0, 0.5*np.pi)
    phi = random.uniform(0, 0.5*np.pi)
    return np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)

def JohanL():
    abc = np.random.normal(size=3)
    a, b, c = abc / np.sqrt(sum(abc**2))
    return a, b, c

def SeverinPappadeux():
    cos_th = 2.0*random.uniform(0, 1.0) - 1.0
    sin_th = math.sqrt(1.0 - cos_th*cos_th)
    phi = random.uniform(0, 2.0*math.pi)
    return sin_th * math.cos(phi), sin_th * math.sin(phi), cos_th
And plotting the distributions:
%matplotlib notebook
import matplotlib.pyplot as plt
f, axes = plt.subplots(3, 4)
for func_idx, func in enumerate([MSeifert, JohanL, user3684792, VBB]):
    axes[0, func_idx].set_title(str(func.__name__))
    res = [func() for _ in range(50000)]
    for idx in range(3):
        axes[idx, func_idx].hist([i[idx] for i in res], bins='auto')
axes[0, 0].set_ylabel('a')
axes[1, 0].set_ylabel('b')
axes[2, 0].set_ylabel('c')
plt.tight_layout()
With the result:
Explanation: The rows show the distributions for a, b and c respectively while the columns show the histograms (distributions) of the different approaches.
The only approaches that give a uniformly random distribution in the range (-1, 1) are JohanL's and Severin Pappadeux's. All other approaches have some features like spikes or a functional behavior in the range [0, 1). Note that these two solutions currently give values between -1 and 1 while all other approaches give values between 0 and 1.
I think it is actually a cool problem, and a nice way to do this is to just use spherical polar coordinates and generate the angles at random.
import random
import numpy as np
def random_pt():
    theta = random.uniform(0, 0.5*np.pi)
    phi = random.uniform(0, 0.5*np.pi)
    return np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)
You could do it like this:
import random
import math
def three_random_numbers_adding_to_one():
    a = 1
    b = 1
    while a**2 + b**2 > 1:  # discard any a and b whose sum of squares already exceeds 1
        a = random.random()
        b = random.random()
    c = math.sqrt(1 - a**2 - b**2)  # fixed c
    return a, b, c
a, b, c = three_random_numbers_adding_to_one()
print(a**2 + b**2 + c**2)
However floats have only limited precision so these won't add to exactly 1, just approximately.
You may need to check if the numbers generated with this function are "random enough". It could be that this setup biases the "randomness".
The "right" answer depends on whether you are looking for a uniform random distribution in space, or on the surface of a sphere, or something else. If you are looking for points on the surface of a sphere, you still have to worry about the cos(theta) factor which will cause points to appear "bunched up" near the poles of the sphere. Since exact nature is not clear from your question, here is a "totally random" distribution that should work:
x = np.random.uniform(0,1,3) # random numbers in [0, 1)
x /= np.sqrt(x[0] ** 2 + x[1] ** 2 + x[2] ** 2)
Another advantage here is that since we are using numpy arrays, you can quickly scale to large sets of points too, by using x = np.random.uniform(0, 1, (3, n)) for any n.
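For example, a minimal sketch of the vectorized version (the point count n below is an arbitrary choice):

import numpy as np

n = 10000
x = np.random.uniform(0, 1, (3, n))        # 3 coordinates for each of n points
x /= np.sqrt((x**2).sum(axis=0))           # normalize each column to unit length
print(np.allclose((x**2).sum(axis=0), 1))  # True: every column satisfies a^2 + b^2 + c^2 = 1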
Time to add another solution, heh...
This time it is truly uniform on the unit sphere point picking - check http://mathworld.wolfram.com/SpherePointPicking.html for details
import math
import random
def random_pt():
    cos_th = 2.0*random.uniform(0, 1.0) - 1.0
    sin_th = math.sqrt(1.0 - cos_th*cos_th)
    phi = random.uniform(0, 2.0*math.pi)
    return sin_th * math.cos(phi), sin_th * math.sin(phi), cos_th

for k in range(0, 100):
    a, b, c = random_pt()
    print("{0} {1} {2} {3}".format(a, b, c, a*a + b*b + c*c))

Python array manipulation, pi*[n+1]^2 - pi*[n]^2

I'm writing a script to subtract the inside cylinder from the outside cylinder for multiple cylinders.
for example: x = pi*[n+1]**2 - pi*[n]**2
However, I'm not sure how to get n to change each time, for example from 1 to 4; I want to be able to change n and have the code run through the new values without having to change everything.
x = pi*[1]**2 - pi*[0]**2
x = pi*[2]**2 - pi*[1]**2
x = pi*[3]**2 - pi*[2]**2
x = pi*[4]**2 - pi*[3]**2
I was trying to get a while loop to work but I can't figure out how to reference n without specifically stating which number in the array I want to reference.
Any help would be greatly appreciated.
rs = 0.2 # Radius of first cylinder
rc = 0.4 # Radius of each cylinder (concrete)
rg = 1 # Radius of each cylinder (soil)
BW = 3 # No. cylinders (concrete)
BG = 2 # No. cylinders (soil)
v1 = np.linspace(rs, rc, num=BW) # Cylinders (concrete)
v2 = np.linspace(rc * 1.5, rg, num=BG) # Cylinders (soil)
n = np.concatenate((v1, v2)) # Combined cylinders
for i in range(BW + BG):
    x = np.pi * (n[i + 1] ** 2) - np.pi * (n[i] ** 2)
Try this:
for n in range(4):  # 0 to 3
    x = pi*[n+1]**2 - pi*[n]**2  # [1] - [0], [2] - [1] and so on...
    # doSomething
If [n] is an index of an array with name num, replace [n] with num[n] like so:
for n in range(4):  # 0 to 3
    x = pi*(num[n+1]**2) - pi*(num[n]**2)  # [1] - [0], [2] - [1] and so on...
    # doSomething
If instead it was simply n, replace [n] with n like so:
for n in range(4):  # 0 to 3
    x = pi*((n+1)**2) - pi*(n**2)  # [1] - [0], [2] - [1] and so on...
    # doSomething
Since your numbers are in a numpy array, it's much more efficient to use broadcast operations across the array (or slices of it), rather than writing an explicit loop and operating on individual items. This is the main reason to use numpy!
Try something like this:
# compute your `n` array as before
areas = pi * n**2 # this will be a new array with the area of each cylinder
area_differences = areas[1:] - areas[:-1] # differences in area between adjacent cylinders
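As a usage sketch with the n array built in the question (np.diff gives the same adjacent differences in one call):

import numpy as np

pi = np.pi
rs, rc, rg = 0.2, 0.4, 1   # radii from the question
BW, BG = 3, 2              # numbers of cylinders from the question
n = np.concatenate((np.linspace(rs, rc, num=BW), np.linspace(rc * 1.5, rg, num=BG)))

areas = pi * n**2
area_differences = areas[1:] - areas[:-1]
print(area_differences)    # [0.15707963 0.21991149 0.62831853 2.0106193 ] (matches the values below)
print(np.diff(areas))      # same result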
How about this:
for i, value in enumerate(n[:-1]):
    print(np.pi * (n[i + 1] ** 2) - np.pi * (value ** 2))
For me it prints:
0.157079632679
0.219911485751
0.628318530718
2.0106192983
Perhaps you want this:
>>> values = [np.pi * (n[i + 1] ** 2) - np.pi * (value ** 2)
for i, value in enumerate(n[:-1])]
>>> values
[0.15707963267948971, 0.2199114857512855, 0.62831853071795885, 2.0106192982974673]
Let's explain it:
we must get all elements in the list but the last, because n[i + 1] fails for the last item, so we use n[:-1] (equivalent to n[0:-1]; we are allowed to omit the start of the slice if it is 0, or the end if it is equal to or greater than len(n)).
enumerate(a_list) returns something resembling a list of pairs in the form
[(0, a_list[0]), (1, a_list[1]), ..., (n, a_list[n])]
for i, value in ... unpacks each pair into variables named i and value
[something for something in a_list] returns a new list. You may do calculations and filter the values. For example, if you want a list of the squares of the odd integers below 10:
>>> [x * x for x in range(10) if x % 2 == 1]
[1, 9, 25, 49, 81]
I think this should provide the results you are looking for:
rs = 0.2 # Radius of first cylinder
rc = 0.4 # Radius of each cylinder (concrete)
rg = 1 # Radius of each cylinder (soil)
BW = 3 # No. cylinders (concrete)
BG = 2 # No. cylinders (soil)
v1 = np.linspace(rs, rc, num=BW) # Cylinders (concrete)
v2 = np.linspace(rc * 1.5, rg, num=BG) # Cylinders (soil)
n = np.concatenate((v1, v2))
results = []
for i, v in enumerate(n):
    if i + 1 < len(n):
        results.append(pi * n[i+1] ** 2 - pi * v ** 2)
    else:
        break

Python double integral taking too long to compute

I am trying to compute the Fresnel integral over a grid of coordinates using dblquad. But it's taking very long and in the end it doesn't give any result.
Below is my code. In this code I integrated only over a 10 x 10 grid but I need to integrate at least over a 500 x 500 grid.
import time
st = time.time()
import pylab
import scipy.integrate as inte
import numpy as np
print 'imhere 0'
def sinIntegrand(y, x, X, Y):
    a = 0.0001
    R = 2e-3
    z = 10e-3
    Lambda = 0.5e-6
    alpha = 0.01
    k = np.pi * 2 / Lambda
    return np.cos(k * (((x-R)**2)*a + (R-(x**2 + y**2)) * np.tan(np.radians(alpha)) + ((x - X)**2 + (y - Y)**2) / (2 * z)))
print 'im here 1'
def cosIntegrand(y, x, X, Y):
    a = 0.0001
    R = 2e-3
    z = 10e-3
    Lambda = 0.5e-6
    alpha = 0.01
    k = np.pi * 2 / Lambda
    return np.sin(k * (((x-R)**2)*a + (R-(x**2 + y**2)) * np.tan(np.radians(alpha)) + ((x - X)**2 + (y - Y)**2) / (2 * z)))
def y1(x, R=2e-3):
    return (R**2 - x**2)**0.5
def y2(x, R=2e-3):
    return -1*(R**2 - x**2)**0.5
points = np.linspace(-1e-3,1e-3,10)
points2 = np.linspace(1e-3,-1e-3,10)
yv,xv = np.meshgrid(points , points2)
#def integrate_on_grid(func, lo, hi,y1,y2):
# """Returns a callable that can be evaluated on a grid."""
# return np.vectorize(lambda n,m: dblquad(func, lo, hi,y1,y2,(n,m))[0])
#
#intensity = abs(integrate_on_grid(sinIntegrand,-1e-3 ,1e-3,y1, y2)(yv,xv))**2 + abs(integrate_on_grid(cosIntegrand,-1e-3 ,1e-3,y1, y2)(yv,xv))**2
Intensity = []
print 'im here2'
for i in points:
    row = []
    for j in points2:
        print 'im here'
        intensity = abs(inte.dblquad(sinIntegrand, -1e-3, 1e-3, y1, y2, (i,j))[0])**2 + abs(inte.dblquad(cosIntegrand, -1e-3, 1e-3, y1, y2, (i,j))[0])**2
        row.append(intensity)
    Intensity.append(row)
Intensity = np.asarray(Intensity)
pylab.imshow(Intensity,cmap = 'gray')
pylab.show()
print str(time.time() - st)
I would really appreciate it if you could suggest a better way of doing this.
Using a scipy.integrate.dblquad to calculate every pixel of your image is going to be slow in any case.
You should try rewriting your mathematical problem so you can use some classical function in scipy.special instead. For instance, scipy.special.fresnel might work, although it is 1D and your problem seems to be 2D. Otherwise, there is a relationship between the Fresnel integral and the incomplete Gamma function (scipy.special.gammainc), if that helps.
If none of this works, as a last resort you can spend time optimizing your code and adapting it to Cython. This will probably give a speedup of a factor of 10 to 100 (see this answer), though that alone wouldn't be sufficient to go from a 10x10 grid to a 500x500 grid.
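For reference, a minimal sketch of calling scipy.special.fresnel, which evaluates the 1D Fresnel sine and cosine integrals S(t) and C(t); whether your 2D integrand can actually be reduced to these is an assumption you would have to check for your geometry:

import numpy as np
from scipy.special import fresnel

t = np.linspace(0, 5, 500)
S, C = fresnel(t)            # fresnel returns the pair (S(t), C(t))
intensity_1d = S**2 + C**2   # |Fresnel integral|^2 along one axis
print(intensity_1d[:5])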
