Generating random numbers for a probability density function in Python

I'm currently working on a project relating to Brownian motion, and trying to simulate some of it using Python (a language I'm admittedly very new at). Currently, my goal is to generate random numbers following a given probability density function. I've been trying to use the scipy library for it.
My current code looks like this:
>>> import scipy.stats as st
>>> class my_pdf(st.rv_continuous):
...     def _pdf(self, x, y):
...         return (1/math.sqrt(4*t*D*math.pi))*(math.exp(-((x^2)/(4*D*t))))*(1/math.sqrt(4*t*D*math.pi))*(math.exp(-((y^2)/(4*D*t))))
...
>>> def get_brown(a, b):
...     D, t = a, b
...     return my_pdf()
...
>>> get_brown(1, 1)
<__main__.my_pdf object at 0x000000A66400A320>
All attempts at launching the get_brown function end up giving me these hexadecimals (always starting at 0x000000A66400A with only the last three digits changing, no matter what parameters I give for D and t). I'm not sure how to interpret that. All I want is to get random numbers following the given PDF; what do these hexadecimals mean?

The result you see is the memory address of the object you have created. Now you might ask: which object? Your function get_brown(int, int) executes return my_pdf(), which creates an instance of the class my_pdf and returns it. If you now want to access the _pdf method of your class and calculate the value of the PDF, you can use this code:
get_brown(1,1)._pdf(x, y)
On the object you have just created you can also use all the methods of the scipy.stats.rv_continuous class, which you can find in the SciPy documentation.
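For example, rvs is the rv_continuous method that actually draws random numbers from the distribution. A rough sketch of a one-dimensional version of your density, with D and t passed as shape parameters instead of being read from surrounding variables (this is an illustration of the mechanism, not your exact setup):

import numpy as np
import scipy.stats as st

class brownian_pdf(st.rv_continuous):
    # 1-D displacement density: exp(-x**2 / (4*D*t)) / sqrt(4*pi*D*t)
    def _pdf(self, x, D, t):
        return np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

dist = brownian_pdf(name="brownian_pdf")
samples = dist.rvs(D=1.0, t=1.0, size=5)  # generic sampling: correct but slow

Because no _cdf or _ppf is provided, scipy falls back on numerical integration and root finding here, so this is fine for small sample sizes but slow for large ones.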
For your situation you could also discard your current code and just use the normal distribution included in scipy, since the displacement of a Brownian particle is normally distributed.
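A minimal sketch of that approach, assuming the PDF above is meant to be the displacement density after time t with diffusion coefficient D (so each coordinate is normal with mean 0 and variance 2*D*t):

import numpy as np
from scipy.stats import norm

def brownian_displacements(D, t, size):
    # Each coordinate of the displacement is N(0, 2*D*t).
    sigma = np.sqrt(2 * D * t)
    x = norm.rvs(loc=0, scale=sigma, size=size)
    y = norm.rvs(loc=0, scale=sigma, size=size)
    return x, y

x, y = brownian_displacements(D=1.0, t=1.0, size=5)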

As noted, this is a memory address. Your function get_brown returns an instance of the my_pdf class, but doesn't evaluate the method inside that class.
What you probably want to do is call the _pdf method on that instance, rather than return the class itself.
def get_brown(a, b):
    D, t = a, b  # what is D,t for?
    return my_pdf()._pdf(a, b)
I expect that the code you've posted is a simplification of what you're really doing, but functions don't need to be inside classes - so the _pdf function could live on its own. Alternatively, you don't need the get_brown function at all - just instantiate the my_pdf class and call the calculation method.

Related

Attempting to use np.insert in a custom class which supports subscripting yields "object does not support item assignment"

I have defined my own class which takes in any matrix and converts it into three numpy arrays inside parentheses (which I assume means it's a tuple). Furthermore, I have added a __getitem__ method which makes the output arrays subscriptable just like normal arrays.
My class is called MatrixConverter, and say x is some random matrix, then:
q=MatrixConverter(x)
Where q gives:
q=(array[1,2,3,4],array[5,6,7,8],array[9,10,11,12])
(Note that this is just an example, it does not produce three arrays with consecutive numbers)
Then, for example, my __getitem__ method allows for:
q[0] = array[1,2,3,4]
q[0][1] = 2
Now, I'm attempting to design a method to add an element into one of the arrays using the np.insert function, such as the following:
class MatrixConverter:
    # some code here
    def __change__(self, n, x):
        self[1] = np.insert(self[1], n, x)
        return self
Then, my desired output for the case where n=2 and x=70 is the following:
In:q.__change__(2,70)
Out:(array[1,2,3,4],array[5,6,70,7,8],array[9,10,11,12])
However, this gives me a TypeError: 'MatrixConverter' object does not support item assignment.
Any help/debugging tips? Should I perhaps use np.concatenate instead?
Thank you!
Change your method to:
def __change__(self, n, x):
    temp = np.insert(self[1], n, x)
    self[1] = temp
    return self
This will help you distinguish between a problem with the insert and a problem with the self[1] = ... setting.
I don't think the problem is with the insert call, but you need to write code that doesn't confuse you on such matters. That's a basic part of debugging.
Beyond that you haven't given us enough code to help you. For example, what does your __getitem__ method look like?
Expressions like array[1,2,3,4] tell me that you aren't actually copying from your code. That's not a valid Python expression, or array display.
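For what it's worth, the error message itself ("object does not support item assignment") means the class defines __getitem__ but not __setitem__. A minimal sketch of how both could look, assuming the three arrays are stored in a list attribute named arrays (the real class wasn't posted, so the attribute name and constructor are guesses):

import numpy as np

class MatrixConverter:
    def __init__(self, matrix):
        # Hypothetical constructor: split the matrix into one array per row.
        self.arrays = [np.asarray(row) for row in matrix]

    def __getitem__(self, i):
        return self.arrays[i]

    def __setitem__(self, i, value):  # this is what item assignment requires
        self.arrays[i] = value

    def __change__(self, n, x):
        self[1] = np.insert(self[1], n, x)
        return self

With __setitem__ defined, self[1] = np.insert(...) works as intended, and np.concatenate is not needed.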

How do I input a math function without Python running it?

I want to define a function which takes in a mathematical function as an input (such as np.sin(x) as an example) but I want it to essentially store it and have a random number generator put random numbers into it.
I know how to do it by directly writing the code, but I want to know how I can do it as a user who only sees the console.
So using np.sin(x) again,
def function(np.sin(x)):
    x = random.uniform(0, 1)
    return value of np.sin(whatever the random number was)
Pass the function itself as the parameter, not its evaluated result:
def function(func):
    x = random.uniform(0, 1)
    return func(x)

function(np.sin)
That said, it is not entirely clear what you want to achieve, but you'll get a different random output on each call. Also, for such a trivial case, you might as well use the numpy function directly:
np.sin(random.uniform(0,1))
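If the goal is to let someone typing at the console pick the function by name, one hedged sketch (assuming the name refers to a numpy function) is to look it up with getattr:

import random
import numpy as np

def evaluate_named_function(name):
    # Look the function up on the numpy module by its name, e.g. "sin".
    func = getattr(np, name)
    x = random.uniform(0, 1)
    return func(x)

name = input("numpy function name (e.g. sin): ")
print(evaluate_named_function(name))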

Python - how to create graph of variable assignment?

In a sample Python class method, I have one or more member objects with arbitrary types and constructor signatures; each call has a single return value and takes one or more of the original parameters of the method. Additionally, the output of a given member object may be used as the input to another member object:
class Blah(...):
    def __init__(...): ...
    def myfunc(param1, param2, ..., param_n):
        r1 = self.obj1(param1, ...)
        ...
        r_n = self.obj_n(param1, r1, ...)
What I need to know is, is there a way to instrument python to track edges between input and output of each invocation of a given set of tracked objects?
For example, as in the above, the result would be a graph: (param1...) -> r1, and (param1,r1...) -> r_n
The actual edge direction doesn't matter so long as the input-output relationship is consistent.
You could trace the function, and create a mapping of every function call.
An example of this is PyTorch's ONNX export capability, which uses this technique. If that's not enough, you could probably resort to the Python debugger API, or instrument all items within a module using the inspect module.
import inspect
inspect.getmembers(your_module, inspect.isfunction)
By creating a class and defining __call__ with the *args/**kwargs convention, you can match the signature of any object or function that you wrap with it. Then, when you iterate over the members of a module, you can wrap each member and re-assign it to a class instance that reads the function metadata or dynamic type information (f.__name__ or otherwise). You can then track the arguments (maintaining names via some unique-id generation scheme) and function names, and build a graph directly from them.
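A minimal sketch of such a wrapper, with made-up names (TracingWrapper, EDGES) purely for illustration: it records an edge from every argument to the result of each wrapped call.

EDGES = []  # (input_label, output_label) pairs, i.e. the graph

def _label(value):
    # Identify a traced value by its type and object id;
    # a real tracer would use something more robust.
    return f"{type(value).__name__}@{id(value):#x}"

class TracingWrapper:
    """Wraps any callable and records argument -> result edges."""
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        result = self.func(*args, **kwargs)
        out = _label(result)
        for arg in list(args) + list(kwargs.values()):
            EDGES.append((_label(arg), out))
        return result

# Usage: wrap the member objects you want to trace,
# e.g. self.obj1 = TracingWrapper(self.obj1)
double = TracingWrapper(lambda x: 2 * x)
y = double(21)
print(EDGES)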

Semantic Type Safety in Python

In my recent project I have the problem that some values are often misinterpreted. For instance, I calculate a wave as a sum of two waves (for which I need two amplitudes and two phase shifts), and then sample it at 4 points. I pass these tuples of four values to different functions, but sometimes I make the mistake of passing wave parameters instead of sample points.
These errors are hard to find, because all the calculations work without any error, but the values are totally meaningless in this context and so the results are just wrong.
What I want now is some kind of semantic type. I want to state that one function returns sample points and another function expects sample points, and that I can do nothing that conflicts with these declarations without immediately getting an error.
Is there any way to do this in python?
I would recommend implementing specific data types to be able to distinguish between different kinds of information with the same structure.
You can simply subclass list for example and then do some type checking at runtime within your functions:
class WaveParameter(list):
    pass

class Point(list):
    pass

# you can use them just like lists
point = Point([1, 2, 3, 4])
wp = WaveParameter([5, 6])

# of course all methods from list are inherited
wp.append(7)
wp.append(8)

# let's check them
print(point)
print(wp)

# type checking examples
print(isinstance(point, Point))
print(isinstance(wp, Point))
print(isinstance(point, WaveParameter))
print(isinstance(wp, WaveParameter))
So you can include this kind of type checking in your functions, to make sure the correct kind of data was passed to it:
def example_function_with_waveparameter(data):
    if not isinstance(data, WaveParameter):
        log.error("received wrong parameter type (%s instead of WaveParameter)" %
                  type(data))
    # and then do the stuff
or simply assert:
def example_function_with_waveparameter(data):
    assert isinstance(data, WaveParameter)
Python's notion of a "semantic type" is called a class, but as mentioned, Python is dynamically typed, so even using custom classes instead of tuples you won't get any compile-time error - at best you'll get runtime errors if your classes are designed in such a way that trying to use one instead of the other will fail.
Now classes are not just about data, they are about behaviour too, so if you have functions that do waveform-specific computations these functions would probably become methods of the Waveform class, and idem for the Point part, and this might be enough to avoid logical errors like passing a "waveform" tuple to a function expecting a "point" tuple.
To make a long story short: if you want a statically typed functional language, Python is not the right tool (Haskell might be a better choice). If you really want / have to use Python, try using classes and methods instead of tuples and functions; it still won't detect type errors at compile time, but chances are you'll have fewer type errors AND those type errors will be detected at runtime instead of producing wrong results.
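As an illustration of that last point, a small hedged sketch (the names Waveform, SamplePoints, and analyse are invented here, not taken from the question):

import math

class SamplePoints(list):
    """Samples of a waveform at a few points; distinct from wave parameters."""
    pass

class Waveform:
    """Sum of two sine waves, described by two amplitudes and two phase shifts."""
    def __init__(self, a1, phi1, a2, phi2):
        self.params = (a1, phi1, a2, phi2)

    def sample(self, xs):
        a1, phi1, a2, phi2 = self.params
        return SamplePoints(a1 * math.sin(x + phi1) + a2 * math.sin(x + phi2)
                            for x in xs)

def analyse(points):
    # Fails loudly at runtime if wave parameters are passed by mistake.
    if not isinstance(points, SamplePoints):
        raise TypeError("expected SamplePoints, got %s" % type(points).__name__)
    return sum(points) / len(points)

wave = Waveform(1.0, 0.0, 0.5, 1.0)
print(analyse(wave.sample([0.0, 0.5, 1.0, 1.5])))  # OK
# analyse(wave.params)  # raises TypeError instead of silently producing nonsense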

Defining a global function in a Python script

I'm new to Python. I am writing a script that will numerically integrate a set of ordinary differential equations using a Runge-Kutta method. Since the Runge-Kutta method is a useful mathematical algorithm, I've put it in its own .py file, rk4.py.
def rk4(x, dt):
    k1 = diff(x)*dt
    k2 = diff(x+k1/2)*dt
    k3 = diff(x+k2/2)*dt
    k4 = diff(x+k3)*dt
    return x + (k1+2*k2+2*k3+k4)/6
The method needs to know the set of equations the user is working with in order to perform the algorithm, so it calls a function diff(x) that will give rk4 the derivatives it needs. Since the equations change from problem to problem, I want diff() to be defined in the script that runs the particular problem. In this case the problem is the orbit of Mercury, so I wrote mercury.py. (This isn't how it will look in the end, but I've simplified it for the sake of figuring out what I'm doing.)
from rk4 import rk4
import numpy as np

def diff(x):
    return x

def mercury(u0, phi0, dphi):
    x = np.array([u0, phi0])
    dt = 2
    x = rk4(x, dt)
    return x

mercury(1, 1, 2)
When I run mercury.py, I get an error:
File "PATH/mercury.py", line 10, in mercury
x=rk4(x,dt)
File "PATH/rk4.py", line 2, in rk4
k1=diff(x)*dt
NameError: global name 'diff' is not defined
I take it that since diff() is not a global function, rk4 knows nothing about diff when it runs. Obviously rk4 is a small piece of code and I could just shove it into whatever script I'm using at the time, but I think a Runge-Kutta integrator is a fundamental mathematical tool, just like the array defined in NumPy, and so it makes sense to make it a function that is called rather than one that is defined in every script that uses it (which may be many). But I also can't go telling rk4.py to import a particular diff from a particular .py file, because that ruins the generality of rk4 that I want in the first place.
Is there a way to define diff globally within a script like mercury.py so that when rk4 is called, it will know about diff?
Accept the function as an argument:
def rk4(diff,  # accept the function to call as an argument
        x, dt):
    k1 = diff(x)*dt
    k2 = diff(x+k1/2)*dt
    k3 = diff(x+k2/2)*dt
    k4 = diff(x+k3)*dt
    return x + (k1+2*k2+2*k3+k4)/6
Then, when you call rk4, simply pass in the function to be executed:
from rk4 import rk4
import numpy as np

def diff(x):
    return x

def mercury(u0, phi0, dphi):
    x = np.array([u0, phi0])
    dt = 2
    x = rk4(diff,  # here we pass the function to rk4
            x, dt)
    return x

mercury(1, 1, 2)
It might be a good idea for mercury to accept diff as an argument too, rather than getting it from the closure (the surrounding code). You then have to pass it in as usual - your call to mercury in the last line would read mercury(diff, 1, 1, 2).
Functions are 'first-class citizens' in Python (as is nearly everything, including classes and modules), in the sense that they can be used as arguments, be held in lists, be assigned to names in namespaces, etc etc.
diff is already a global in the module mercury.py. But in order to use it in rk4.py you would need to import it like this:
from mercury import diff
That's the direct answer to your question.
However, passing the diff function to rk4 as suggested by @poorsod is much more elegant and also avoids a circular dependency between mercury.py and rk4.py, so I suggest you do it that way.
