In Scala, since it is a functional programming language, I can repeatedly apply a function to a starting value to create a list of [f(initial), f(f(initial)), f(f(f(initial))), ...].
For example, if I want to predict the future temperature based on the current temperature, I can do something like this in Python:
import random as rnd
def estimateTemp(previousTemp):
    # function to estimate the temperature; for simplicity assume it is as follows:
    return previousTemp * rnd.uniform(0.8, 1.2) + rnd.uniform(-1.0, 1.0)

Temperature = [0.0 for i in range(100)]
for i in range(1, 100):
    Temperature[i] = estimateTemp(Temperature[i - 1])
The problem with the previous code is that it uses a for loop and requires a predefined array for the temperature; in many languages you can replace the for loop with an iterator. For example, in Scala you can easily do the previous example by using the iterate method to create a list:
val Temperature = List.iterate(0.0,100)( n =>
(n * (scala.util.Random.nextDouble()*0.4+0.8)) +
(scala.util.Random.nextDouble()*2-1)
)
Such an implementation is easy to follow and clearly written.
Python has implemented the itertools module to imitate some features of functional programming languages. Are there any methods in the itertools module that imitate the Scala iterate method?
You could turn your function into an infinite generator and take an appropriate slice:
import random as rnd
from itertools import islice
def estimateTemp(startTemp):
    # infinite generator: yields the current estimate, then steps it forward
    while True:
        yield startTemp
        startTemp = startTemp * rnd.uniform(0.8, 1.2) + rnd.uniform(-1.0, 1.0)
temperature = list(islice(estimateTemp(0.0), 0, 100))
An equivalent program can be produced by using itertools.accumulate (note that estimateTemp here refers to the original plain function from the question, not the generator above):

from itertools import accumulate

temperature = list(accumulate(range(0, 100), lambda x, y: estimateTemp(x)))
So here we have an accumulator x that is updated on each step; the y parameter (the next element of the iterable) is ignored. We use the range only as a way to iterate 100 times.
Unfortunately, itertools does not have this functionality built-in. Haskell and Scala both have this function, and it bothered me too. An itertools wrapper called Alakazam that I am developing has some additional helper functions, including the aforementioned iterate function.
Runnable example using Alakazam:
import random as rnd
import alakazam as zz
def estimateTemp(previousTemp):
    return previousTemp * rnd.uniform(0.8, 1.2) + rnd.uniform(-1.0, 1.0)

Temperature = zz.iterate(estimateTemp, 0.0).take(100).list()
print(Temperature)
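For reference, if you don't want a third-party dependency, a hand-rolled iterate takes only a few lines; this is a minimal sketch of the same idea:

import random as rnd
from itertools import islice

def iterate(f, x):
    # yields x, f(x), f(f(x)), ... indefinitely, mirroring Scala's List.iterate
    while True:
        yield x
        x = f(x)

def estimateTemp(previousTemp):
    return previousTemp * rnd.uniform(0.8, 1.2) + rnd.uniform(-1.0, 1.0)

Temperature = list(islice(iterate(estimateTemp, 0.0), 100))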
Related
I'm trying to speed up my Python code by porting a bunch of my nested loops over to Fortran and calling them as subroutines.
But a lot of my loops call NumPy, and special functions from SciPy like Bessel functions.
Before I try to use Fortran, I was wondering if it is possible to import scipy and numpy into my Fortran subroutine and call those modules for Bessel functions?
Otherwise, would I have to implement the Bessel function in Fortran in order to use it?
Ideally, I would create some sort of subroutine that would optimize this code below. This is just a snippet of my entire project to give you an idea of what I'm trying to accomplish.
I understand that there are other practices I should implement to improve the speed, but for now I was investigating the benefits of calling fortran subroutines in my main python program.
for m in range(self.MaxNum_Eigen):
    # looping through the eigenvalues for the given maximum number of eigenvalues allotted
    bm = self.beta[m]

    # note (not sure): rprime = r, BUT tprime != t.
    # K is a list of 31 elements for this particular case
    K = (bm / math.sqrt((self.H2**2) + (bm**2))) * (math.sqrt(2) / self.b) * ((scipy.special.jv(0, bm * self.r)) / (scipy.special.jv(0, bm * self.b)))  # Kernel, K0(bm, r)

    # initial condition
    F = [37] * (self.n1)

    # Integral transform of the initial condition.
    # MATLAB syntax is trapz(X, Y):  trapz(r, r.*K.*F)
    # Python syntax is trapz(Y, X):  np.trapz(self.r*K*F, self.r)
    Fbar = np.ones((self.n1, self.n2)) * (np.trapz(self.r * K * F, self.r))

    # steady-state condition: integral is in steady state
    SS = np.zeros((sz[0], sz[1]))
    coeff = 5000000 * math.exp(-(10**3))  # defining value outside of loop with higher precision

    for i in range(sz[0]):
        for j in range(sz[1]):
            # MATLAB's reshape(Array, size1, size2) takes the item being resized plus the new shape.
            # TODO: create self variables so we are not re-initializing them over and over again?
            # TODO: could generators be used here, and how?
            s = np.reshape(tau[i, j, :], (1, n3))

            # will be used for rprime and tprime in the Ozisik solution
            [RR, TT] = np.meshgrid(self.r, s)

            # ERROR DUE TO ROUNDING OF HEAT SOURCE:
            # 5000000*math.exp(-(10**3)) evaluates to zero, because
            # log10(e^-1000) = -1000 * 0.4342944819 = -434.2944819,
            # so e^-1000 ~ 5.08e-435, which underflows double precision (min ~1e-308)
            g = coeff * (RR - self.c * TT)**2  # [W / m^2] heat source

            K = (bm / math.sqrt(self.H2**2 + bm**2)) * (math.sqrt(2) / self.b) * ((scipy.special.jv(0, bm * RR)) / (scipy.special.jv(0, bm * self.b)))

            # integral transform of heat source
            gbar = np.trapz(RR * K * g, self.r, 2)  # trapz(Y, X, dx (spacing))
            gbar = gbar.transpose()

            # boundary condition. BE SURE TO WRITE IT IN TERMS OF s!
            f2 = self.h2 * 37
            A = (self.alpha / self.k) * gbar + ((self.alpha * self.b) / self.k2) * ((bm / math.sqrt(self.H2**2 + bm**2)) * (math.sqrt(2) / self.b) * ((scipy.special.jv(0, bm * self.b)) / (scipy.special.jv(0, bm * self.b)))) * f2
            # A is essentially a constant; is this correct all the time? What does A represent?

            SS[i, j] = np.trapz(np.exp((-self.alpha * bm**2) * (T[i, j] - s)) * A, s)

    # INSIDE m LOOP
    K = (bm / math.sqrt((self.H2**2) + (bm**2))) * (math.sqrt(2) / self.b) * ((scipy.special.jv(0, bm * R)) / (scipy.special.jv(0, bm * self.b)))
    U[:, :, m] = np.exp(-self.alpha * bm**2 * T) * K * Fbar + K * SS
    # print(['Eigenvalue ' num2str(m) ', found at time ' num2str(toc) ' seconds'])  # leftover MATLAB line
Compilation of answers given in the comments
Answers specific to my code:
As vorticity mentioned, my code in itself was not using the NumPy and SciPy packages to the fullest extent.
Regarding the Bessel function, 'royvib' mentions using .j0 from SciPy rather than .jv. Calling the general Bessel function jv is much more computationally expensive; since I knew that I would be using a zeroth-order Bessel function for many of my declarations, the minor change from jv to j0 sped up the process.
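For illustration, the substitution is a one-line change; a quick sketch (the array x here is just an illustrative argument, not from my original code):

import numpy as np
import scipy.special

x = np.linspace(0.1, 10.0, 5)
slow = scipy.special.jv(0, x)   # general-order Bessel function of the first kind
fast = scipy.special.j0(x)      # specialized zeroth-order routine
print(np.allclose(slow, fast))  # True: same values, cheaper call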
In addition, I bound functions to local names outside the loop to avoid repeatedly searching the library for the appropriate function. Example below.
Before:

for i in range(SomeLength):
    some_var = scipy.special.jv(1, coeff)

After:

Bessel = scipy.special.jv
for i in range(SomeLength):
    some_var = Bessel(1, coeff)
Storing the function under a local name saved time by avoiding the dotted lookup ('.') through the library on every single loop iteration. However, keep in mind this makes the Python less readable, and readability is the main reason I chose to do this project in Python. I do not have an exact measurement of how much time this step cut from my process.
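If you want to measure the saving yourself, a rough timeit comparison might look like this (a sketch, not my original benchmark):

import timeit

dotted = timeit.timeit("scipy.special.jv(1, 2.5)",
                       setup="import scipy.special", number=100000)
local = timeit.timeit("Bessel(1, 2.5)",
                      setup="import scipy.special; Bessel = scipy.special.jv",
                      number=100000)
print(dotted, local)  # the local name skips two attribute lookups per call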
Fortran specific:
Since I was able to improve my Python code, I did not go this route and so lack specifics, but the general answer, as stated by 'High Performance Mark', is that yes, there are libraries that handle Bessel functions in Fortran.
If I do port my code over to Fortran or use f2py to mix Fortran and Python, I will update this answer accordingly.
I'm looking for some help understanding best practices regarding dictionaries in Python.
I have an example below:
def convert_to_celsius(temp, source):
    conversion_dict = {
        'kelvin': temp - 273.15,
        'romer': (temp - 7.5) * 40 / 21
    }
    return conversion_dict[source]

def convert_to_celsius_lambda(temp, source):
    conversion_dict = {
        'kelvin': lambda x: x - 273.15,
        'romer': lambda x: (x - 7.5) * 40 / 21
    }
    return conversion_dict[source](temp)
Obviously, the two functions achieve the same goal, but via different means. Could someone help me understand the subtle difference between the two, and what the 'best' way to go about this would be?
If both dictionaries are created inside the function, then the former will be more efficient: although it performs two calculations when only one is needed, the latter has more overhead from creating the lambdas each time it is called:
>>> import timeit
>>> setup = "from __main__ import convert_to_celsius, convert_to_celsius_lambda, convert_to_celsius_lambda_once"
>>> timeit.timeit("convert_to_celsius(100, 'kelvin')", setup=setup)
0.5716437913429102
>>> timeit.timeit("convert_to_celsius_lambda(100, 'kelvin')", setup=setup)
0.6484164544288618
However, if you move the dictionary of lambdas outside the function:
CONVERSION_DICT = {
    'kelvin': lambda x: x - 273.15,
    'romer': lambda x: (x - 7.5) * 40 / 21
}

def convert_to_celsius_lambda_once(temp, source):
    return CONVERSION_DICT[source](temp)
then the latter is more efficient, as the lambda objects are only created once, and the function only does the necessary calculation on each call:
>>> timeit.timeit("convert_to_celsius_lambda_once(100, 'kelvin')", setup=setup)
0.3904035060131186
Note that this will only be a benefit where the function is being called a lot (in this case, 1,000,000 times), so that the overhead of creating the two lambda function objects is less than the time wasted in calculating two results when only one is needed.
The dictionary is totally pointless, since you need to re-create it on each call but all you ever do is a single look-up. Just use an if:
def convert_to_celsius(temp, source):
    if source == "kelvin":
        return temp - 273.15
    elif source == "romer":
        return (temp - 7.5) * 40 / 21
    raise KeyError("unknown temperature source '%s'" % source)
Even though both achieve the same thing, the first version is more readable and faster.
In your first example, the arithmetic for every entry is evaluated as soon as convert_to_celsius is called, because building the dictionary evaluates each value.
In the second example, only the required conversion is calculated.
If the second function had to do an expensive calculation, then it would probably make sense to use a function instead, but for this particular example it is not required.
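To make the eager-versus-lazy point concrete, here is a small sketch with a sleep standing in for an expensive calculation:

import time

def expensive(temp):
    time.sleep(1)  # stand-in for a costly computation
    return temp - 273.15

# eager dict: expensive() runs even when the other key is requested
eager = {'cheap': 42, 'costly': expensive(300)}   # takes ~1 s to build

# lazy dict: nothing runs until the chosen value is called
lazy = {'cheap': lambda: 42, 'costly': lambda: expensive(300)}
value = lazy['cheap']()                           # returns instantly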
As others have pointed out, neither of your options is ideal. The first one does both calculations every time and has an unnecessary dict. The second one has to create the lambdas on every call. If this example is the goal, then I agree with unwind: just use an if statement. If the goal is to learn something that can be extended to other uses, I like this approach:
convert_to_celsius = {
    'kelvin': lambda temp: temp - 273.15,
    'romer': lambda temp: (temp - 7.5) * 40 / 21
}

newtemp = convert_to_celsius[source](temp)
Your calculation definitions are all stored together, and your function call is uncluttered and meaningful.
In Python, I'm trying to write an algorithm alias_freq(f_signal,f_sample,n), which behaves as follows:
def alias_freq(f_signal, f_sample, n):
    f_Nyquist = f_sample / 2.0
    if f_signal <= f_Nyquist:
        return <the n'th frequency higher than f_signal that will alias to f_signal>
    else:
        return <the frequency (lower than f_Nyquist) that f_signal will alias to>
The following is the code that I have been using to test the above function (f_signal, f_sample, and n below are chosen arbitrarily, just to fill out the code):
import numpy as np
import matplotlib.pyplot as plt
t=np.linspace(0,2*np.pi,500)
f_signal=10.0
y1=np.sin(f_signal*t)
plt.plot(t,y1)
f_sample=13.0
t_sample=np.linspace(0,int(f_sample)*(2*np.pi/f_sample),int(f_sample))
y_sample=np.sin(f_signal*t_sample)
plt.scatter(t_sample,y_sample)
n=2
f_alias=alias_freq(f_signal,f_sample,n)
y_alias=np.sin(f_alias*t)
plt.plot(t,y_alias)
plt.xlim(xmin=-.1,xmax=2*np.pi+.1)
plt.show()
My thinking is that if the function works properly, the plots of both y1 and y_alias will hit every scattered point from y_sample. So far I have been completely unsuccessful in getting either the if statement or the else statement in the function to do what I think it should, which makes me believe that either I don't understand aliasing nearly as well as I want to, or my test code is no good.
My questions are: Preliminarily, is the test code I'm using sound for what I'm trying to do? And primarily, what is the alias_freq function that I am looking for?
Also please note: If some Python package has a function just like this already built in, I'd love to hear about it - however, part of the reason I'm doing this is to give myself a device to understand phenomena like aliasing better, so I'd still like to see what my function should look like.
If I understood the question correctly, the frequency of the aliased signal is abs(f_sample * n - f_signal), where n is the integer for which n * f_sample is closest to f_signal.
Thus:
n = round(f_signal / float(f_sample))
f_alias = abs(f_sample * n - f_signal)
This should work for frequencies under and over Nyquist.
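As a quick sanity check (my addition, using the numbers from the question's test code):

def alias_freq(f_signal, f_sample):
    n = round(f_signal / float(f_sample))
    return abs(f_sample * n - f_signal)

print(alias_freq(10.0, 13.0))  # 3.0: a 10 Hz signal sampled at 13 Hz aliases to 3 Hz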
I figured out the answer to my question and just realized that I forgot to post it here, sorry. It turns out it was something silly: Antii's answer is basically right, but the way I wrote my test code I needed f_sample - 1 in the alias_freq function where I just had f_sample. There is still a phase-shift issue that appears sometimes, but plugging in either 0 or pi for the phase shift has worked for me every time; I think it is just due to even or odd folding. The working function and test code are below.
import numpy as np
import matplotlib.pyplot as plt
#Given a sample frequency and a signal frequency, return frequency that signal frequency will be aliased to.
def alias_freq(f_signal, f_sample, n):
    f_alias = np.abs((f_sample - 1) * n - f_signal)
    return f_alias
t=np.linspace(0,2*np.pi,500)
f_signal=13
y1=np.sin(f_signal*t)
plt.plot(t,y1)
f_sample=7
t_sample=np.linspace(0,int(f_sample)*(2*np.pi/f_sample),f_sample)
y_sample=np.sin((f_signal)*t_sample)
plt.scatter(t_sample,y_sample)
f_alias=alias_freq(f_signal,f_sample,3)
y_alias=np.sin(f_alias*t+np.pi)#Sometimes with phase shift, usually np.pi for integer f_signal and f_sample, sometimes without.
plt.plot(t,y_alias)
plt.xlim(xmin=-.1,xmax=2*np.pi+.1)
plt.show()
Here is a Python aliased-frequency calculator:
def get_aliased_freq(f, fs):
    """
    return aliased frequency of f sampled at fs
    """
    fn = fs / 2  # Nyquist frequency
    if int(f / fn) % 2 == 0:
        return f % fn         # frequency folds directly onto the 0..fn band
    else:
        return fn - (f % fn)  # frequency reflects back from the Nyquist frequency
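As a quick check against the earlier example (my addition): for a 10 Hz signal sampled at 13 Hz, fn = 6.5, int(10 / 6.5) = 1 is odd, so the result is 6.5 - (10 % 6.5) = 3.0, agreeing with the round-to-nearest-multiple formula above.

print(get_aliased_freq(10.0, 13.0))  # 3.0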
import numpy

def rtpairs(R, T):
    for i in range(numpy.size(R)):
        o = 0.0
        for j in range(T[i]):
            o += 2 * (numpy.pi) / T[i]
            yield R[i], o

R = [0.0, 0.1, 0.2]
T = [1, 10, 20]

for r, t in genpolar.rtpairs(R, T):
    plot(r * cos(t), r * sin(t), 'bo')
This program is supposed to be a generator, but I would like to check whether I'm doing the right thing by first asking it to return some values for pheta (see below).
import numpy as np

def rtpairs(R=None, T=None):
    R = np.array(R)
    T = np.array(T)
    for i in range(np.size(R)):
        pheta = 0.0
        for j in range(T[i]):
            pheta += (2 * np.pi) / T[i]
            return pheta
Then I typed import omg as o at the prompt and ran:

x = [o.rtpairs(R=[0.0,0.1,0.2],T=[1,10,20])]
# I tried to collect all values generated by the loops

It turns out this gives me only one value, which is 2*pi. I have a habit of checking my code halfway through. Is there any way for me to get a list of angles using the code above? I don't understand why I must use a generator structure to check the first version, while I couldn't use the normal loop method to check it.
Normal loop e.g.
x = [i for i in range(10)]
# x == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Here I can see a list of values I should get.
return pheta
You switched to return instead of yield. It isn't a generator any more; it's stopping at the first return. Change it back.
x = [o.rtpairs(R=[0.0,0.1,0.2],T=[1,10,20])]
This wraps the rtpairs return value in a 1-element list. That's not what you want. If you want to extract all elements from a generator and store them in a list, call list on the generator:
x = list(o.rtpairs(R=[0.0,0.1,0.2],T=[1,10,20]))
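The difference is easy to see with a toy generator (a minimal sketch):

def gen():
    yield 1
    yield 2

print([gen()])      # [<generator object gen at 0x...>]: a 1-element list holding the generator
print(list(gen()))  # [1, 2]: the elements extracted from the generator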
I am trying to define a series of functions of, let's say, one variable.
First I defined the variable x:
from sympy import *
x=var('x')
I want to define a series of functions using Lambda, something that looks like this:
f0=Lambda(x,x)
f1=Lambda(x,x**2)
....
fn=....
How can I define this?
Thanks
It took me a while to understand what you're after. It seems that you are most likely after loops, but do not articulate this. So I would suggest you do this:
from sympy import *

x = symbols('x')

f = []
for i in range(1, 11):  # generate powers 1 to 10
    f.append(Lambda(x, x**i))

# then you use them like this
print(f[0](2))  # indexes are 0-based
print(f[1](2))
# ... up to f[9]
Anyway, you're not really clear in your question about what the progression should be.
EDIT: As for generating random functions, here is an example that generates polynomials of growing order with a random selection of lower-order terms:
from random import randint
from sympy import *

x = symbols('x')

f = []
for i in range(1, 11):
    fun = x**i
    for j in range(i):
        fun += randint(0, 1) * x**j
    f.append(Lambda(x, fun))

# then you use them like this
# (note: I am using Python 2.7; if you use 3.x, modify to suit your needs)
print(map(str, f))  # show them all

def call(me):
    return me(2)

print(map(call, f))
Your random procedure may be different, as there are an infinite number of possible random choices. Note that the result is different each time you run the creation loop; use a random seed if you need the same generation between runs. Once created, the functions are stable within a single process.
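For instance, fixing the seed might look like this (a sketch):

from random import seed
seed(42)  # run before the creation loop so the same polynomials are produced every run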