I want to check if the randint function is already seeded.
Should I manually seed it? If yes, how can I do that?
There is no need to explicitly seed, unless you have very specific requirements. The internal Random() instance is automatically seeded the first time you import the module.
From the module documentation:
The functions supplied by this module are actually bound methods of a hidden instance of the random.Random class.
and from Random.__init__():
class Random(_random.Random):
    # ...
    def __init__(self, x=None):
        # ...
        self.seed(x)
    # ...

_inst = Random()
so the instance calls self.seed(None) when the module is first imported. None means 'best available source of a seed' (which may fall back to time.time()); see random.seed():
If a is omitted or None, the current system time is used (together with the PID of the process). If randomness sources are provided by the operating system, they are used instead of the system time (see the os.urandom() function for details on availability).
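If you do need a reproducible sequence (e.g. in tests), a minimal sketch of explicit seeding:

import random

random.seed(42)        # seed the hidden global Random instance
first = random.randint(1, 10)

random.seed(42)        # re-seeding replays the same sequence
assert random.randint(1, 10) == first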
Related
To use the random.randint method, I can use the following two options:
import random
# one way to use random
random.randint(1,10)
# second way: instantiating first
instance = random.Random()
instance.randint(1,10)
Why can I use the first way even though I have not instantiated an instance yet?
Importing random instantiates Random into a private variable.
On line 786 of random.py:

_inst = Random()

and then on line 791:

randint = _inst.randint
random.py resides in the Lib folder of your Python installation if you want to check it out yourself.
Lines 786 - 808 are the lines of interest. They bind the methods of that private Random instance to module-level names so they can be called this way.
From the docs.
The functions supplied by this module are actually bound methods of a hidden instance of the random.Random class. You can instantiate your own instances of Random to get generators that don’t share state.
https://docs.python.org/3/library/random.html
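A quick sketch showing that separate instances really don't share state:

import random

a = random.Random(123)  # independent generator with its own internal state
b = random.Random(123)  # same seed, so it yields the same sequence
assert a.randint(1, 10) == b.randint(1, 10)

a.random()              # advancing a affects neither b nor the module-level functions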
I'm writing some code for an ESP8266 microcontroller using MicroPython, which has some different classes as well as some additional methods in the standard built-in classes. To allow me to debug on my desktop, I've built some helper classes so that the code will run. However, I've run into a snag with MicroPython's time module, which has a time.sleep_ms method because the standard time.sleep method on MicroPython does not accept floats. I tried using the following code to extend the built-in time module, but it fails to import properly. Any thoughts?
class time(time):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def sleep_ms(self, ms):
        super().sleep(ms / 1000)
This code lives in a file named time.py. Secondly, I know I'll have issues with having to import time.time, which I would like to fix. I also realize I could name this something else and add traps for it in my microcontroller code, but I would like to avoid any special functions in what's loaded onto the controller, to save space and cycles.
You're not trying to override a class, you're trying to monkey-patch a module.
First off, if your module is named time.py, it will never be loaded in preference to the built-in time module. Truly built-in modules (compiled into the interpreter core, as opposed to the C extension modules that ship with CPython) are special: they are always loaded without checking sys.path, so you can't even attempt to shadow the time module, even if you wanted to (you generally don't, and doing so is incredibly ugly). In this case, the built-in time module shadows yours; you can't import your module under the plain name time at all, because the built-in is found without even looking at sys.path.
Secondly, assuming you use a different name and import it for the sole purpose of monkey-patching time (or do something terrible like adding the monkey-patch to a custom sitecustomize module), it's not trivial to make the function truly native to the monkey-patched module: defining it in any normal way gives it the scope of the module where it was defined, not the same scope as the other functions from the time module. If you don't need it to be "truly" defined as part of time, the simplest approach is just:
import time

def sleep_ms(ms):
    return time.sleep(ms / 1000)

time.sleep_ms = sleep_ms
Of course, as mentioned, sleep_ms is still part of your module, and carries your module's scope around with it (that's why you do time.sleep, not just sleep; you could do from time import sleep to avoid qualifying it, but it's still a local alias that might not match time.sleep if someone else monkey-patches time.sleep later).
If you want to make it behave like it's part of the time module, so you can reference arbitrary things in time's namespace without qualification and always see the current function in time, you need to use eval to compile your code in time's scope:
import time
# Compile a string of the function's source to a code object that's not
# attached to any scope at all
# The filename argument is garbage, it's just for exception traceback
# reporting and the like
code = compile('def sleep_ms(ms): sleep(ms / 1000)', 'time.py', 'exec')
# eval the compiled code with a scope of the globals of the time module
# which both binds it to the time module's scope, and inserts the newly
# defined function directly into the time module's globals without
# defining it in your own module at all
eval(code, vars(time))
del code, time # May as well leave your monkey-patch module completely empty
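Assuming the patch above has been run, any code that imports time then sees the new function:

import time

time.sleep_ms(500)  # sleeps for half a second via the patched-in function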
I need to create a class which takes a random number generator (i.e. a numpy.random.RandomState object) as a parameter. If this argument is not specified, I would like it to default to the random generator that numpy uses when we run numpy.random.<random-method>. How do I access this global generator?

Currently I am doing this by just assigning the module object as the random generator (since they share methods via duck typing). However, this causes issues with pickling (a module object cannot be pickled) and with deep-copying. I would like to use the RandomState object behind numpy.random instead.
PS: I'm using Python 3.4.
As well as what kazemakase suggests, we can take advantage of the fact that module-level functions like numpy.random.random are really methods of a hidden numpy.random.RandomState by pulling the __self__ directly from one of those methods:
numpy_default_rng = numpy.random.random.__self__
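Applied to the question, a minimal sketch of a class that falls back to this hidden instance (the class name Sampler and its draw method are made up for illustration):

import numpy as np

class Sampler:
    def __init__(self, rng=None):
        # fall back to the hidden RandomState behind numpy.random's functions
        self.rng = rng if rng is not None else np.random.random.__self__

    def draw(self, n=5):
        return self.rng.uniform(size=n)

s = Sampler()                           # shares numpy's global state
t = Sampler(np.random.RandomState(7))   # isolated, reproducible state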
numpy.random imports * from numpy.random.mtrand, which is an extension module written in Cython. The source code shows that the global state is stored in the variable _rand. This variable is not imported into the numpy.random scope, but you can get it directly from mtrand.
import numpy as np
from numpy.random.mtrand import _rand as global_randstate
np.random.seed(42)
print(np.random.rand())
# 0.3745401188473625
np.random.RandomState().seed(42) # Different object, does not influence global state
print(np.random.rand())
# 0.9507143064099162
global_randstate.seed(42) # this changes the global state
print(np.random.rand())
# 0.3745401188473625
I don't know how to access the global state. However, you can use a RandomState object and pass it along. Random distributions are attached to it, so you call them as methods.
Example:
import numpy as np

def computation(parameter, rs):
    return parameter * np.sum(rs.uniform(size=5) - 0.5)

my_state = np.random.RandomState(seed=3)
print(computation(3, my_state))
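Note that, unlike a module object, a RandomState instance pickles and deep-copies cleanly, which was the original motivation for avoiding the module; a quick check:

import copy
import pickle
import numpy as np

state = np.random.RandomState(seed=3)
clone = copy.deepcopy(state)                  # works; deep-copying a module does not
restored = pickle.loads(pickle.dumps(state))  # works; pickling a module does not

# both copies carry the same internal state, so they produce the same draw
assert restored.uniform() == clone.uniform()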
I am writing a Python wrapper for a C library using the cffi.
The C library has to be initialized and shut down. Also, the cffi needs some place to save the state returned from ffi.dlopen().
I can see two paths here:
Either I wrap this whole stateful business in a class like this
class wrapper(object):
    def __init__(self):
        self.c = ffi.dlopen("mylibrary")
        self.c.initialize()

    def __del__(self):
        self.c.terminate()
Or I provide two global functions that hide the state in a global variable
def initialize():
    global __library
    __library = ffi.dlopen("mylibrary")
    __library.initialize()

def terminate():
    global __library  # needed: del would otherwise make __library local here
    __library.terminate()
    del __library
The first path is somewhat cumbersome in that it requires the user to always create an object that serves no purpose other than managing the library state. On the other hand, it makes sure that terminate() is actually called every time.
The second path seems to result in a somewhat easier API. However, it exposes some hidden global state, which might be a bad thing. Also, if the user forgets to call terminate(), the C library is not unloaded correctly (which is not a big problem on the C side).
Which one of these paths would be more pythonic?
Exposing a wrapper object only makes sense in Python if the library actually supports something like multiple instances in one application. If it doesn't support that, or it's not really relevant, go with kindall's suggestion and just initialize the library when it is imported, then add an atexit handler for cleanup.
Adding wrappers around a stateless API, or an API without support for keeping different sets of state, is not really Pythonic and would raise expectations that different instances have some kind of isolation.
Example code:
import atexit

# Normal library initialization
__library = ffi.dlopen("mylibrary")
__library.initialize()

# Private library cleanup function
def __terminate():
    __library.terminate()

# Register the function to be called on clean interpreter termination
atexit.register(__terminate)
For more details about atexit, see this question and, of course, the Python documentation.
I manage a fairly large python-based quantum chemistry suite, PyQuante. I'm currently struggling with how to set various defaults so that users can choose among different options at runtime.
For example, I have three different methods for computing electron repulsion integrals. Let's call them a,b,c. I used to simply pick the one I liked best (say, c), and have that hard-wired into the module that computes these integrals.
I have now modified this to use a module, Defaults.py, that contains all such hard-wires. But this is set at compile/install time. I would now like users to be able to override these options at runtime, say, using a .pyquanterc.py file.
In my integral routines, I currently have something like
from Defaults import integral_method
I know about dictionaries and the .update() method, but I don't know how I would use this in real life. My defaults module looks like:
integral_method = c
Should I modify the end of Defaults.py to look for a .pyquanterc.py file and override these values? E.g.

if os.path.exists(os.path.expanduser('~/.pyquanterc.py')): do_something
If so, what should do_something look like?
With your current setup, the user can change the default functions in his scripts quite easily:
import Defaults
Defaults.integral_method = somefunc
If the user adds this to his script, all your modules that use integral_method from Defaults will use somefunc to calculate integrals.
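If you also want Defaults.py itself to pick up a user rc file at import time, here is a minimal sketch; the rc file name comes from your question, and loading it via exec is just one straightforward choice, not an established PyQuante mechanism:

# at the end of Defaults.py
import os

_rc = os.path.expanduser('~/.pyquanterc.py')
if os.path.exists(_rc):
    # run the rc file in this module's namespace, so an assignment such as
    # integral_method = b overrides the default defined above
    with open(_rc) as f:
        exec(f.read(), globals())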
I might do this via a factory class.
class IntegralSolver:
    """
    Factory class containing methods for solving integrals.

    >>> solver = IntegralSolver("method1")
    >>> solver(x)
    # solution via method1

    Can also be used directly:

    >>> IntegralSolver.method2(x)
    # solution via method2
    """
    def __init__(self, method):
        # look up the chosen method once; special methods like __call__ are
        # resolved on the class, so we store the method and dispatch to it
        # from a real __call__ defined on the class
        self._method = getattr(self, method)

    def __call__(self, x):
        return self._method(x)

    @staticmethod
    def method1(x):
        return method1_solution

    @staticmethod
    def method2(x):
        return method2_solution
It really depends on how your users run the toolset. If they tweak the Python code each time, just setting a block at the top labeled OPTIONS should be good. If they run it from the command line, use the argparse library to let them switch options on the command line. Perhaps have it read the options out of a file with configparser: read a default file with your options and, if the user provides one, an additional file with their options.
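For the configparser route, a minimal sketch; the file names are made up, and files later in the list override earlier ones:

import configparser
import os

config = configparser.ConfigParser()
# read() silently skips files that don't exist, so a missing user file is fine
config.read(['defaults.ini', os.path.expanduser('~/.pyquanterc.ini')])

# fall back to the hard-wired default when the option is absent
integral_method = config.get('integrals', 'method', fallback='c')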