The final step shown on Symbolab is a conversion to decimal, giving Radius = 2005.65151, which I'm not sure how to recreate, or whether there is a step in between.
The result I have so far (RadiusD) prints a fraction.
Image: polygon formula, where S = side length, N = number of sides, r = radius
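(For reference, the formula the image presumably shows is the regular-polygon circumradius r = S / (2 * sin(180°/N)).)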
Error in this code:
import math
from fractions import Fraction, Decimal

def TestRadius():
    HalfSideLen = 120/2
    # 60
    edgeNum2Radians = math.radians(105)  # Edge count
    Radians = math.pi/edgeNum2Radians  # correct so far
    # 1.7142857142857142
    Radius = HalfSideLen / math.sin(Radians)
    RadiusD = Decimal(HalfSideLen / math.sin(Radians))
    # 1066491443117295/17592186044416
    # wanting r = 2005.65151
    print(RadiusD)

print(TestRadius())
My math is very poor; thanks for your help.
Corrected by #ytung-dev. Somehow step 3 was returning a correct result, so I didn't look too closely at step 2, which is where the error actually was.
import math

def TestRadius():
    HalfSideLen = 120/2
    edgeNum = 105
    Radians = math.pi/edgeNum
    Radius = HalfSideLen / math.sin(Radians)
    return Radius

print(TestRadius())
Try this:
import math

def fx(s, n):
    return s / (2 * math.sin(math.pi/n))

print(fx(120, 105))
# 2005.65
A few things to note:
math.sin() uses radians.
sin() in Symbolab uses degrees.
The equation in your image uses degrees.
180 degrees = math.pi radians.
What is wrong in your script is that edgeNum is a count, not an angle, so you should not convert it to radians. The only degree-to-radian conversion you need to handle is the 180 degrees in the equation.
So, to make your equation work in Python, simply replace the 180 degrees in the equation with math.pi.
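To illustrate that point, here is a small sketch (not from the original post; 120 and 105 are the side length and edge count from the question) showing that converting the equation's 180 degrees once is equivalent to using math.pi, and gives the expected radius:

import math

s, n = 120, 105  # side length and number of sides from the question

# Convert the equation's 180 degrees to radians once...
r_from_degrees = s / (2 * math.sin(math.radians(180) / n))
# ...or use math.pi directly; the two are identical.
r_from_pi = s / (2 * math.sin(math.pi / n))

print(r_from_degrees, r_from_pi)  # both approximately 2005.65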
I am trying to find a solution to a problem. I have searched, and the closest I can find is a post on interpolation from these forums, with which I have had only limited success.
I have multiple polygons defined as separate scatter plots, each given by a number of x-y points. What I am attempting to do is draw a horizontal line and find the x-min and x-max values at which that line intersects the polygon. I would like to do this for the full range of y values, so in theory I could step through a loop and record the values at y=1, y=2, etc. An engineering software package I am using requires input parameters in this format, hence my attempt to find a solution.
Any advice or pointers on the best approach to this problem would be much appreciated, and I will give it a go.
import matplotlib.pyplot as plt
import numpy as np
x =[1,2.6,2.56,2.57,10,11.66,13.07,11.78,11.27,6.49,5.98,5.76,3.02,1.87,1]
y =[15.59,15.09,15.14,15.15,16,17,25.47,26,27,27,28,28,26.67,16.37,15.59]
plt.plot(x,y)
plt.grid()
plt.show()
You can linearly interpolate between the two points where the polygon crosses the line defined by y = ys. To find those points, you can subtract the ys value from the y values of the polygon, find the points between which the sign changes, and take the minimum and maximum of the interpolated crossings.
import numpy as np
import matplotlib.pyplot as plt

x = [1,2.6,2.56,2.57,10,11.66,13.07,11.78,11.27,6.49,5.98,5.76,3.02,1.87,1]
y = [15.59,15.09,15.14,15.15,16,17,25.47,26,27,27,28,28,26.67,16.37,15.59]

def findminmax(t, x, zero=0):
    t = np.array(t); x = np.array(x)
    ta = []
    p = (x-zero) > 0
    ti = np.where(np.bitwise_xor(p[1:], p[:-1]))[0]
    for i in ti:
        y_ = np.sort(x[i:i+2])
        z_ = t[i:i+2][np.argsort(x[i:i+2])]
        t_ = np.interp(zero, y_, z_)
        ta.append(t_)
    if ta:
        return min(ta), max(ta)
    else:
        return None, None

plt.plot(x, y)

ys = np.arange(13, 29, 0.2)

result = []
for s in ys:
    mi, ma = findminmax(x, y, zero=s)
    if mi and ma:
        result.append([mi, ma, s])
        print("y = {}, minimum {}, maximum {}".format(s, mi, ma))

result = np.array(result)
plt.scatter(result[:,0], result[:,2], label="min", color="limegreen")
plt.scatter(result[:,1], result[:,2], label="max", color="crimson")
plt.legend()
plt.grid()
plt.show()
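An alternative sketch (not from the original answer, and assuming the shapely library is available): the same per-line minimum and maximum x values can be read off the intersection of a horizontal line with the polygon:

import numpy as np
from shapely.geometry import LineString, Polygon

poly = Polygon(list(zip(x, y)))          # the polygon from the question
for s in np.arange(13, 29, 0.2):
    # a horizontal line slightly wider than the polygon, at height s
    cut = LineString([(min(x) - 1, s), (max(x) + 1, s)]).intersection(poly)
    if not cut.is_empty:
        xmin, _, xmax, _ = cut.bounds    # bounding box of the intersection
        print("y = {}, minimum {}, maximum {}".format(s, xmin, xmax))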
I'm trying to translate the Matlab code below into Python. The code numerically calculates the para-state of a deuterium molecule and then plots the result. When I try to translate it to Python, I seem to get stuck in a nested for loop which calculates a sum. I have been searching the internet for the past few days, without success.
Because it's a physics code, I will mention some aspects of it. First we calculate the partition function (Z). After that there is a calculation of the energy, which is a partial derivative of ln(Z) with respect to beta. From this we can calculate the specific heat (approximately) as the derivative of the energy with respect to temperature.
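In formulas, what the code below computes is approximately:

Z_odd(beta) = sum over s = 1, 3, 5, ..., 31 of 3*(2s+1)*exp(-s(s+1)*epsilon*beta)
E = -d(ln Z_odd)/d(beta)   (evaluated as a finite difference)
C = dE/dT                  (again as a finite difference)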
So the matlab code looks like this:
epsilon = 0.0038*1.60217662*10^-19;
k = 1.38*10^-23;
T = 1:.1:2000;
beta = 1./(k*T);
%partitionfunction
clear Z Zodd;
for i = 1:length(T)
clear p;
for s = 1:2:31;
a = 2*s+1;
b = s^2+s;
p(s) = 3*a*exp(-b*epsilon*beta(i));
end
Zodd(i) = sum(p);
end
%energy
ln_Zodd = log(Zodd);
for i = 1 : (length(T)-1)
Epara(i) = -(ln_Zodd(i+1)-ln_Zodd(i))/(beta(i+1)-beta(i));
end
%heat capacity
for i = 1 : (length(T)-2)
Cpara(i) = (Epara(i+1)-Epara(i))/(T(i+1)-T(i));
end
%plot
x = k*T/epsilon;
plot(x(1:6000),Cpara(1:6000)/k, 'r');
axis([0 7 0 1.5]);
ylabel('C_v/k');
xlabel('kT/eps');
The corresponding python code:
import numpy as np
import matplotlib.pyplot as plt
import math

epsilon = 0.0038*1.60217662*10**-19
k = 1.38*10**-23
T = np.arange(1,2000,0.1)
beta = 1/(k*T)

#partitionfunction
for i in np.arange(1,len(T)):
    for s in np.arange(1,31,2):
        p[s] = 3*(2*s+1)*math.exp(-(s**2+s)*epsilon*beta(i))
    Zodd[i] = sum(p)

#energy
ln_Zodd = math.log(Zodd)
for i in np.arange(1,(len(T) - 1)):
    Epara[i] = -(ln_Zodd(i + 1) - ln_Zodd(i)) / (beta(i + 1) - beta(i))

#heat capacity
for i in np.arange(1,(len(T) - 2)):
    Cpara[i] = (Epara(i + 1) - Epara(i)) / (T(i + 1) - T(i))

#plot
x = k*T/epsilon
plt.plot(x(np.arange(1,6000)), Cpara(np.arange(1,6000)) / k, 'r')
plt.axis([0, 7, 0, 1.5])
plt.ylabel('C_v/k')
plt.xlabel('kT/eps')
plt.show()
This should be the easiest way to calculate (approximately) this problem, because the analytic expression is much more involved. I'm new to Python, so any suggestions or corrections are appreciated.
I agree with #rayryeng that this question is off-topic. However, as I'm interested in matlab, python, and theoretical physics, I took the time to look through your code.
There are multiple syntactic problems with it, and multiple semantic ones as well. Arrays should always be accessed with [] in Python; you often try to use (). Also, array indexing naturally starts from 0, unlike in Matlab.
Here's a syntactically and semantically corrected version of your original code:
import numpy as np
import matplotlib.pyplot as plt
#import math #use np.* if you have it already imported

epsilon = 0.0038*1.60217662*10**-19
k = 1.38*10**-23
T = np.arange(1,2000,0.1)
beta = 1.0/(k*T) #changed to 1.0 for safe measure; redundant

#partitionfunction
svec = np.arange(1,31,2)
p = np.zeros(max(svec))      #added pre-allocation
Zodd = np.zeros(len(T))      #added pre-allocation
for i in np.arange(len(T)):  #changed to index Zodd from 0
    for s in svec:           #changed to avoid magic numbers
        p[s-1] = 3*(2*s+1)*np.exp(-(s**2+s)*epsilon*beta[i]) #changed to index p from 0; changed beta(i) to beta[i]; changed to np.exp
    Zodd[i] = sum(p)

#energy
ln_Zodd = np.log(Zodd)           #changed to np.log
Epara = np.zeros(len(T)-2)       #added pre-allocation
for i in np.arange(len(T) - 2):  #changed to index Epara from 0
    Epara[i] = -(ln_Zodd[i + 1] - ln_Zodd[i]) / (beta[i + 1] - beta[i]) #changed bunch of () to []

#heat capacity
Cpara = np.zeros(len(T)-3)       #added pre-allocation
for i in np.arange(len(T) - 3):  #changed to index Cpara from 0
    Cpara[i] = (Epara[i + 1] - Epara[i]) / (T[i + 1] - T[i])

#plot
x = k*T/epsilon
plt.plot(x[:6000], Cpara[:6000] / k, 'r') #fixed and simplified array indices
plt.axis([0, 7, 0, 1.5])
plt.ylabel('C_v/k')
plt.xlabel('kT/eps')
plt.show()
Take the time to look through the comments I made; they are there to instruct you. If something is not clear, please ask for clarification :)
However, this code is far from efficient. In particular, your double loop takes a long time to run (which might explain why you thought it hung). So I also wrote a heavily numpy-based version.
Here's the result:
import numpy as np
import scipy.constants as consts
import matplotlib.pyplot as plt
epsilon=0.0038*consts.eV #changed eV
k = consts.k #changed
T = np.arange(1,2000,0.1)
beta = 1.0/(k*T) #changed to 1.0 for safe measure; redundant
#partitionfunction
s=np.arange(1,31,2)[:,None]
Zodd = (3*(2*s+1)*np.exp(-(s**2+s)*epsilon*beta)).sum(axis=0)
#energy
ln_Zodd = np.log(Zodd) #changed to np.log
#Epara = - (ln_Zodd[1:]-ln_Zodd[:-1])/(beta[1:]-beta[:-1]) #manual version
Epara = - np.diff(ln_Zodd)/np.diff(beta)
#heat capacity
Cpara=np.diff(Epara)/np.diff(T)[:-1]
#plot
x = k*T/epsilon
plt.plot(x[:len(Cpara)],Cpara / k,'r') #fixed and simplified array indices
plt.axis([0, 7, 0, 1.5])
plt.ylabel('C_v/k')
plt.xlabel('kT/eps')
plt.show()
Again, please review the changes made. I used the scipy.constants module to import physical constants to high precision. I also made use of array broadcasting, which allowed me to turn your double loop into a sum of a matrix along one of its dimensions (just as you should have done in Matlab; your original Matlab code is also far from efficient).
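As a small illustration of the broadcasting trick used above (hypothetical values, just to show how the shapes combine):

import numpy as np

s = np.arange(1, 31, 2)[:, None]   # column vector, shape (16, 1)
beta = np.linspace(1.0, 2.0, 4)    # shape (4,); stand-in values for the real beta array
M = (s**2 + s) * beta              # broadcasting pairs every s with every beta -> shape (16, 4)
print(M.shape)                     # (16, 4)
print(M.sum(axis=0).shape)         # (4,)  one partition-function-style sum per beta value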
Here's the result, which is the same for both versions:
You can see that it seems right: at high temperature you get the Dulong-Petit behaviour, and as T -> 0 we get the zero limit, in accordance with the third law of thermodynamics. The heat capacity decays exponentially, but this makes sense, since you have a finite energy gap.
I was wondering if there is a way to set the audio pitch. A certain tone would be the base; I want to know how to make the pitch go up or down. Thanks.
Also, how do you play an audio tone? If you know about any modules that do this, I would like to know about them. Thanks.
My goal is to create a Pong game that a blind person could play: the higher the ball is, the higher the pitch; the lower the ball, the lower the pitch. Preferably in Python. Thanks in advance.
If you want to try the PyAudio library, you can use this piece of code I created a few days ago!
import pyaudio
import struct
import math

SHRT_MAX = 32767  # a short uses 16 bits in two's complement

def my_sin(t, frequency):
    radians = t * frequency * 2.0 * math.pi
    pulse = math.sin(radians)
    return pulse

# pulse_function creates numbers in the [-1,1] interval
def generate(duration=5, pulse_function=(lambda t: my_sin(t, 1000))):
    sample_width = 2
    sample_rate = 44100
    sample_duration = 1.0/sample_rate
    total_samples = int(sample_rate * duration)
    p = pyaudio.PyAudio()
    pformat = p.get_format_from_width(sample_width)
    stream = p.open(format=pformat, channels=1, rate=sample_rate, output=True)
    for n in range(total_samples):
        t = n*sample_duration
        pulse = int(SHRT_MAX*pulse_function(t))
        data = struct.pack("h", pulse)
        stream.write(data)

# example of a function I took from wikipedia
major_chord = f = lambda t: (my_sin(t,440)+my_sin(t,550)+my_sin(t,660))/3

# choose any frequency you want
# choose amplitude from 0 to 1
def create_pulse_function(frequency=1000, amplitude=1):
    return lambda t: amplitude * my_sin(t, frequency)

if __name__ == "__main__":
    # play fundamental sound at 1000Hz for 5 seconds at maximum intensity
    f = create_pulse_function(1000, 1)
    generate(pulse_function=f)
    # play fundamental sound at 500Hz for 5 seconds at maximum intensity
    f = create_pulse_function(500, 1)
    generate(pulse_function=f)
    # play fundamental sound at 500Hz for 5 seconds at 50% intensity
    f = create_pulse_function(500, 0.5)
    generate(pulse_function=f)
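For the Pong idea specifically, one possible sketch (a hypothetical helper, building on the generate() and create_pulse_function() functions above) is to map the ball's vertical position to a frequency:

# Hypothetical mapping: height 0.0 = bottom of the screen, 1.0 = top of the screen.
def height_to_frequency(height, f_low=300.0, f_high=1200.0):
    return f_low + height * (f_high - f_low)

# Play a short beep whose pitch reflects a ball halfway up the screen:
generate(duration=0.1, pulse_function=create_pulse_function(height_to_frequency(0.5)))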
I have a binary file from which I have to read data. The file consists of a 128x128x243 matrix (hex-formatted) which I have read with the following code:
with open("zubal_voxel_man.dat", "rb") as fileHandle:
    dim_x = 128
    dim_y = 128
    dim_z = 243
    data = np.zeros((dim_x, dim_y, dim_z), dtype=np.int)
    for p in range(0, dim_x):
        for q in range(0, dim_y):
            for r in range(0, dim_z):
                data[p][q][r] = ord(fileHandle.read(1))
How do I visualize these data with Python? Each x,y,z position has a value from 0 to 255 (grey scale) which I would like to render.
Any help is greatly appreciated!
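A side note on the reading step: the byte-by-byte loop can be replaced by a single np.fromfile call (a sketch, assuming the file is exactly 128*128*243 raw unsigned bytes stored in the same order the loops read them):

import numpy as np

with open("zubal_voxel_man.dat", "rb") as fileHandle:
    data = np.fromfile(fileHandle, dtype=np.uint8).reshape(128, 128, 243)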
Part of your problem is with the code:
datax = data[:,0]
datay = data[:,1]
dataz = data[:,2]
This is not doing what you expect of slicing along a single axis: it takes a slice at Y=0, then Y=1, then Y=2, and plots them against each other. Your other issue is that you have a three-dimensional array of values, which gives each value four dimensions (X, Y, Z, value), and you are trying to plot these as a surface, which only has three dimensions.
I think your first priority is to clarify what your data represents and how it is structured.
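If the goal is just to inspect the volume, a minimal sketch (assuming the data array from the question) is to view one 2-D slice at a time with imshow:

import matplotlib.pyplot as plt

# show the middle slice along the z axis as a grey-scale image
plt.imshow(data[:, :, data.shape[2] // 2], cmap='gray', vmin=0, vmax=255)
plt.colorbar()
plt.show()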
I am dealing with a rather large data set on which I need to do binary classification using a kernelized perceptron. I am using this source code: https://gist.github.com/mblondel/656147.
There are three things here that can be parallelized: 1) the kernel computation, 2) the update rule, and 3) the projection part. I also made some other speed-ups, such as computing only the upper triangular part of the kernel and then mirroring it into a full symmetric matrix:
K = np.zeros((n_samples, n_samples))
for index in itertools.combinations_with_replacement(range(n_samples), 2):
    K[index] = self.kernel(X[index[0]], X[index[1]], self.gamma)
#make the full KERNEL
K = K + np.triu(K, 1).T
I also parallelized the projection part like this:
def parallel_project(self, X):
    """Function for parallelizing prediction"""
    y_predict = np.zeros(self.nOfWorkers, "object")
    pool = mp.Pool(processes=self.nOfWorkers)
    results = [pool.apply_async(prediction_worker,
                                args=(self.alpha, self.sv_y, self.sv, self.kernel, (parts,)))
               for parts in np.array_split(X, self.nOfWorkers)]
    pool.close()
    pool.join()
    i = 0
    for r in results:
        y_predict[i] = r.get()
        i += 1
    return np.hstack(y_predict)
and the worker:
def prediction_worker(alpha, sv_y, sv, kernel, samples):
    """WORKER FOR PARALLELIZING PREDICTION PART"""
    print "starting:", mp.current_process().name
    X = samples[0]
    y_predict = np.zeros(len(X))
    for i in range(len(X)):
        s = 0
        for a1, sv_y1, sv1 in zip(alpha, sv_y, sv):
            s += a1 * sv_y1 * kernel(X[i], sv1)
        y_predict[i] = s
    return y_predict.flatten()
But the code is still too slow. Can you give me any hints regarding parallelization or any other speed-up?
Remark: please provide a general solution; I am not dealing with custom kernel functions.
Thanks.
Here's something that should give you an instant speedup. The kernels in Mathieu's example code take single samples, but then full Gram matrices are computed using them:
K = np.zeros((n_samples, n_samples))
for i in range(n_samples):
    for j in range(n_samples):
        K[i,j] = self.kernel(X[i], X[j])
This is slow, and can be avoided by vectorizing the kernel functions:
def linear_kernel(X, Y):
    return np.dot(X, Y.T)

def polynomial_kernel(X, Y, p=3):
    return (1 + np.dot(X, Y.T)) ** p

# the Gaussian RBF kernel is a bit trickier
Now the Gram matrix can be computed as just
K = kernel(X, X)
The project function should be changed accordingly to speed that up as well.
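For the Gaussian RBF kernel mentioned in the comment above, a vectorized version could look like this (a sketch; the exact signature and the sigma default in the gist may differ):

import numpy as np

def gaussian_kernel(X, Y, sigma=5.0):
    X = np.atleast_2d(X)
    Y = np.atleast_2d(Y)
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2*x.y, computed for all pairs at once
    sq_dists = (X**2).sum(axis=1)[:, None] + (Y**2).sum(axis=1)[None, :] - 2 * np.dot(X, Y.T)
    return np.exp(-sq_dists / (2 * sigma**2))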