In a Python class method, I have one or more member objects of arbitrary type and constructor signature. Each call to one of them takes one or more of the method's original parameters and returns a single value. Additionally, the output of one member object may be used as the input to another:
class Blah(...):
    def __init__(self, ...):
        ...

    def myfunc(self, param1, param2, ..., param_n):
        r1 = self.obj1(param1, ...)
        ...
        r_n = self.obj_n(param1, r1, ...)
What I need to know is: is there a way to instrument Python to track edges between the input and output of each invocation of a given set of tracked objects?
For example, as in the above, the result would be a graph: (param1...) -> r1, and (param1,r1...) -> r_n
The actual edge direction doesn't matter so long as the input-output relationship is consistent.
You could trace the function and create a mapping of every function call.
An example of this is PyTorch's ONNX export capability, which uses this technique. If that's not enough, you could resort to the Python debugger API, or instrument all items within a module using the inspect module.
import inspect
inspect.getmembers(your_module, inspect.isfunction)
By creating a class whose __call__ accepts *args and **kwargs, you can match the signature of any object or function you wrap with it. Then, as you iterate over the members of a module, you can wrap each member in an instance of that class and re-assign it. The wrapper can read the function's metadata or dynamic type information (f.__name__ or otherwise), track the arguments (maintaining names via some unique-id generation scheme) and function names, and build a graph right out of them.
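As a minimal sketch of that wrapping idea (TrackedCallable, the labeling scheme, and your_module are made up for illustration; labels key off id(), so they are only valid while the labeled objects stay alive):

import inspect
import itertools

class TrackedCallable:
    # Shared across all wrappers so that the output of one call can be
    # recognised as the input of a later call.
    edges = []          # (input_labels, output_label, callable_name)
    _seen = {}          # id(value) -> label
    _counter = itertools.count()

    def __init__(self, wrapped, name=None):
        self._wrapped = wrapped
        self._name = name or getattr(wrapped, "__name__", repr(wrapped))

    @classmethod
    def _label(cls, value):
        key = id(value)
        if key not in cls._seen:
            cls._seen[key] = "v%d" % next(cls._counter)
        return cls._seen[key]

    def __call__(self, *args, **kwargs):
        inputs = [self._label(a) for a in args]
        inputs += [self._label(v) for v in kwargs.values()]
        result = self._wrapped(*args, **kwargs)
        TrackedCallable.edges.append((inputs, self._label(result), self._name))
        return result

# Wrap and re-assign every function in a module (self.obj1 ... self.obj_n
# could be wrapped the same way in __init__):
# for name, fn in inspect.getmembers(your_module, inspect.isfunction):
#     setattr(your_module, name, TrackedCallable(fn, name))

f = TrackedCallable(lambda x: x + 1, "inc")
g = TrackedCallable(lambda x, y: x * y, "mul")
r1 = f(3)
r2 = g(3, r1)
print(TrackedCallable.edges)  # [(['v0'], 'v1', 'inc'), (['v0', 'v1'], 'v2', 'mul')]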
After learning about the different data types, I learned that once an object of a given type is created, it has built-in methods that can do 'things'.
Playing around, I noticed that while some methods return a value, others change the original stored data.
Is there any specific term for these two types of methods, and is there any intuition or logic as to which methods return a value and which make changes?
For example:
abc= "something"
defg= [12,34,11,45,132,1]
abc.capitalise() #this returns a value
defg.sort() #this changes the orignal list
Is there any specific term for these two types of methods
A method that changes an object's state (e.g. list.sort()) is usually called a "mutator" (it "mutates" the object). There's no general name for methods that return values. They could be "getters" (methods that take no arguments and return part of the object's state), alternative constructors (methods called on the class itself that provide another way to construct an instance), or just methods that take some arguments, do some computation based on both the arguments and the object's state, and return a result. A method can actually do anything at once: some computation AND a change to the object's state AND returning a value.
is there any intuition or logic as to which methods return a value and which make changes?
Some Python objects are immutable (strings, numerics, tuples, etc.), so when you're working with one of those types you know you won't have any mutator. Apart from that special case: nope, you will have to check the docs. The only naming convention here is that methods whose names start with "set_" and take one argument will change the object's state based on that argument (and most often return nothing), and methods whose names start with "get_" and take no arguments will return information about the object's state and change nothing (you'll often see the former called "setters" and the latter "getters"). But like any convention, it's only followed by those who follow it; IOW, don't assume that because a method's name starts with "get_" or "set_" it will indeed behave as expected.
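A quick way to see the difference in the interpreter is to put the built-in sorted() (which returns a new list) next to list.sort() (which mutates):

lst = [3, 1, 2]
print(sorted(lst))  # [1, 2, 3] -- a new list; lst is untouched
print(lst)          # [3, 1, 2]
print(lst.sort())   # None -- mutators conventionally return nothing
print(lst)          # [1, 2, 3] -- the list itself changed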
Strings are immutable, so any library routine that does string manipulation has to return a new string.
For the other types, you will have to refer to the library documentation.
Suppose I have a module PyFoo.py that has a function bar. I want bar to print all of the local variables associated with the namespace that called it.
For example:
#! /usr/bin/env python
import PyFoo as pf
var1 = 'hi'
print(locals())
pf.bar()
The two last lines would give the same output. So far I've tried defining bar as such:
def bar(x=locals):
    print(x())

def bar(x=locals()):
    print(x)
But neither works. The first ends up being what's local to bar's namespace (which I guess is because that's when it's evaluated), and the second is as if I passed in globals (which I assume is because it's evaluated during import).
Is there a way I can have the default value of argument x of bar be all variables in the namespace which called bar?
EDIT 2018-07-29:
As has been pointed out, what was given was an XY Problem; as such, I'll give the specifics.
The module I'm putting together will allow the user to create various objects that represent different aspects of a numerical problem (e.g. various topology definitions, boundary conditions, constitutive models, etc.) and define how any given object interacts with any other object(s). The idea is for the user to import the module, define the various model entities that they need, and then call a function which will take all objects passed to it, make the adjustments needed to ensure compatibility between them, and then write out a file that represents the entire numerical problem as a text file.
The module has a function generate that accepts each of the various types of aspects of the numerical problem. The default value for all arguments is an empty list. If a non-empty list is passed, then generate will use those instances for generating the completed numerical problem. If an argument is an empty list, then I'd like it to take in all instances in the namespace that called generate (which I will then parse out the appropriate instances for the argument).
EDIT 2018-07-29:
Sorry for any lack of understanding on my part (I'm not that strong a programmer), but I think I might understand what you're saying with respect to an instance being declared or registered.
From my limited understanding, could this be done by creating some sort of registry dataset (like a list or dict) in the module, created when the module is imported, which all module classes take in by default? During class initialization, self can be appended to that dataset, and the generate function can then take the registry as the default value for one of its arguments. (A sketch of this idea follows below.)
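Something along those lines could look like this (a minimal sketch; Topology and the file-writing step are made up for illustration, and None is used in place of a mutable [] default):

# registry created once, when the module is imported
_registry = []

class Topology:                     # stands in for any model-entity class
    def __init__(self, name):
        self.name = name
        _registry.append(self)      # every new instance registers itself

def generate(topologies=None):
    # fall back to every matching instance registered so far
    if topologies is None:
        topologies = [e for e in _registry if isinstance(e, Topology)]
    for entity in topologies:
        print("writing", entity.name)   # stand-in for the real output step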
There's no way you can do what you want directly.
locals() just returns the local variables in whatever namespace it's called in. As you've seen, you have access to the namespace the function is defined in at the time of definition, and you have access to the namespace of the function itself from within the function, but you don't have access to any other namespaces.
You can do what you want indirectly… but it's almost certainly a bad idea. At least this smells like an XY problem, and whatever it is you're actually trying to do, there's probably a better way to do it.
But occasionally it is necessary, so in case you have one of those cases:
The main good reason to want to know the locals of your caller is for some kind of debugging or other introspection function. And the way to do introspection is almost always through the inspect library.
In this case, what you want to inspect is the interpreter call stack. The calling function will be the first frame on the call stack behind your function's own frame.
You can get the raw stack frame:
inspect.currentframe().f_back
… or you can get a FrameInfo representing it:
inspect.stack()[1]
As explained at the top of the inspect docs, a frame object's local namespace is available as:
frame.f_locals
Note that this has all the same caveats that apply to getting your own locals with locals(): what you get isn't the live namespace but a mapping that, even if it is mutable, can't be used to modify the namespace (or, worse, in 2.x, one that may or may not modify the namespace, unpredictably), and that has all cell and free variables flattened into their values rather than their cell references.
Also, see the big warning in the docs about not keeping frame objects alive unnecessarily (or calling their clear method if you need to keep a snapshot but not all of the references, but I think that only exists in 3.x).
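Putting that together, a minimal bar() along these lines might look like (a sketch):

import inspect

def bar():
    caller = inspect.currentframe().f_back   # frame of whoever called bar()
    try:
        print(caller.f_locals)
    finally:
        del caller   # don't keep the frame alive (see the docs warning above)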
I'm currently working on a project relating to Brownian motion, and trying to simulate some of it using Python (a language I'm admittedly very new at). My current goal is to generate random numbers following a given probability density function. I've been trying to use the scipy library for it.
My current code looks like this:
>>> import math
>>> import scipy.stats as st
>>> class my_pdf(st.rv_continuous):
...     def _pdf(self, x, y):
...         return (1/math.sqrt(4*t*D*math.pi))*math.exp(-(x**2)/(4*D*t)) \
...              * (1/math.sqrt(4*t*D*math.pi))*math.exp(-(y**2)/(4*D*t))
...
>>> def get_brown(a, b):
...     D, t = a, b
...     return my_pdf()
...
>>> get_brown(1, 1)
<__main__.my_pdf object at 0x000000A66400A320>
All attempts at launching the get_brown function end up giving me these hexadecimals (always starting at 0x000000A66400A with only the last three digits changing, no matter what parameters I give for D and t). I'm not sure how to interpret that. All I want is to get random numbers following the given PDF; what do these hexadecimals mean?
The result you see is the memory address of the object you have created. Now you might ask: which object? Your method get_brown(int, int) calls return my_pdf() which creates an object of the class my_pdf and returns it. If you want to access the _pdf function of your class now and calculate the value of the pdf you can use this code:
get_brown(1,1)._pdf(x, y)
On the object you have just created you can also use all the methods of the scipy.stats.rv_continuous class, which are listed in the scipy documentation.
For your situation you could also discard your current code and just use the normal distribution included in scipy, since Brownian motion is essentially a normal (Gaussian) random process.
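For instance, the pdf in the question factors into two independent Gaussians with variance 2*D*t, so the same samples can be drawn directly from scipy's normal distribution (a sketch):

import numpy as np
import scipy.stats as st

D, t = 1.0, 1.0
# each coordinate of the 2-D displacement is N(0, 2*D*t)
x, y = st.norm.rvs(scale=np.sqrt(2 * D * t), size=2)
print(x, y)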
As noted, this is a memory location. Your function get_brown gets an instance of the my_pdf class, but doesn't evaluate the method inside that class.
What you probably want to do is call the _pdf method on that instance, rather than return the class itself.
def get_brown(a, b):
    D, t = a, b  # what is D, t for?
    return my_pdf()._pdf(a, b)
I expect that the code you've posted is a simplification of what you're really doing, but functions don't need to be inside classes, so the _pdf function could live on its own. Alternatively, you don't need the get_brown function: just instantiate the my_pdf class and call the calculation method.
I'm struggling with PyYAML docs to understand a probably easy thing.
I have a dictionary that maps string names to python objects:
lut = {'bar_one': my_bar_one_obj,
       'bar_two': my_bar_two_obj}
and I'd like to load a YAML file like this and map all "foo" nodes to my dictionary objects (the inverse, dumping, is not really necessary)
node1:
  # ...
  foo: "bar_one"
node2:
  # ...
  foo: "bar_two"
My first thought was to use add_constructor but I couldn't find a way to give it an extra kwarg. Maybe a custom loader?
PyYAML docs aren't really helpful or probably I'm looking for the wrong keywords...
I could accept using a custom tag like
node1:
  # ...
  foo: !footag "bar_one"
node2:
  # ...
  foo: !footag "bar_two"
But detecting just foo nodes would be nicer
You are not looking for the wrong keywords; this is not something any of the YAML parsers I know of was made to do. YAML parsers load a, possibly complex, self-contained data structure. What you want to do is merge that self-contained structure, during one of the parsing steps, into an already existing structure (lut). The parser is built to allow tweaking by providing alternative routines, not by providing routines plus data.
There is no option for that built into PyYAML, i.e. there is no built-in way to tell the loader about lut so that PyYAML does something with it, and certainly not to attach key-value pairs (assuming that is what you mean by the nodes) as values to its keys.
Probably the easiest way of getting what you want is using some post process which takes lut and the data loaded from your YAML file (which is also a dict) and combine the two.
If you want to try and do this with add_constructor, then what you need to do is create a class with a __call__ method, create an instance of that class with lut as argument, and then pass that instance in as an alternative constructor:
class ConstructorWithLut:
    def __init__(self, lut):
        self._lut = lut

    def __call__(self, loader, node):
        # the actual constructor routine added by add_constructor
        return self._lut[loader.construct_scalar(node)]

constructor_with_lut = ConstructorWithLut(lut)
SomeConstructor.add_constructor('your_tag', constructor_with_lut)
In which you can replace 'your_tag' with u'tag:yaml.org,2002:map' if you want
your constructor to handle (all) normal dicts.
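Hooked up against the !footag variant from the question, that could look like this (a sketch; SafeLoader stands in for SomeConstructor, and strings stand in for the real objects):

import yaml

lut = {'bar_one': 'MY_BAR_ONE_OBJ', 'bar_two': 'MY_BAR_TWO_OBJ'}
yaml.SafeLoader.add_constructor('!footag', ConstructorWithLut(lut))

data = yaml.safe_load("""
node1:
  foo: !footag "bar_one"
""")
print(data)   # {'node1': {'foo': 'MY_BAR_ONE_OBJ'}}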
Another option is to do this during YAML loading. But once again, you cannot just tweak the Loader, or one of its constituent components (the Constructor), because you normally hand in the class, not an object, and you need an object to be able to attach lut. So what you would do is create your own constructor, your own loader that uses that constructor, and then a load() replacement that instantiates your loader and attaches lut (by just adding it as a unique attribute, or by passing it in as a parameter and handing it on to your constructor).
Your constructor, which should be a subclass of one of the existing constructors, then has to have its own construct_mapping() that first calls the parent class's construct_mapping() and, before returning the result, inspects whether it can update the attribute to which lut has been assigned. You cannot do this by just looking at the keys of the dict for foo, because when you see such a key you don't have access to the parent node that you would need to assign to lut. What you need to do instead is see whether any of the values of the mapping is a dict that has a key named foo; if so, that dictionary can be used to update lut based on the value associated with foo.
I would certainly first implement the post process stage using two routines:
def update_from_yaml(your_dict, yaml_data):
    for node_key in yaml_data:
        node_value = yaml_data[node_key]
        map_value(your_dict, node_key, node_value)

def map_value(your_dict, key, value):
    foo_val = value.get('foo')
    if foo_val is None:  # key foo not found
        return
    your_dict[foo_val] = value  # or = {key: value}
I am not sure what you really mean by "assigning all foo nodes"; the YAML data has no nodes at the top level, it only has keys and values. So you either assign that pair, or only its value (a dict).
Once those two routines work satisfactorily, you can try to implement the add_constructor or Loader based alternatives, in which you should be able to re-use at least map_value.
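A quick way to exercise those two routines, with an empty dict standing in for lut:

import yaml

lut = {}
yaml_data = yaml.safe_load("""
node1:
  foo: "bar_one"
node2:
  foo: "bar_two"
""")
update_from_yaml(lut, yaml_data)
print(lut)   # {'bar_one': {'foo': 'bar_one'}, 'bar_two': {'foo': 'bar_two'}}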
If I have an instance of class A, is there any way to know what arguments were used to instantiate that instance?
I looked at the inspect module, and there are tools that are sooo close, but not quite right. For instance, inspect.getargvalues(frame) almost works, except you can only get the frame from within the class itself. I want to get these after the fact.
Ideally, what I want is:
instance_a = ClassA(arguments)
inspect.get_values_set_to_the_init(instance_a)
I don't want to have to save the arguments from within __init__ if avoidable.
I should say why I want this, in case there is a completely different approach: I want to be able to recreate a 'replica' of the object by saving the arguments (using my imaginary function above) and then instantiating a new object by passing exactly the same arguments to __init__. Pickle, shelve and marshal don't work, since my object is apparently unserializable.
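Nothing records the constructor arguments unless something saves them at call time, but the saving doesn't have to happen inside __init__'s body. One sketch of a workaround (record_init_args is made up, and it assumes you can decorate the class) is a decorator that stashes the arguments on the instance:

import functools

def record_init_args(cls):
    original_init = cls.__init__

    @functools.wraps(original_init)
    def __init__(self, *args, **kwargs):
        # stash the arguments before running the real __init__
        self._init_args, self._init_kwargs = args, kwargs
        original_init(self, *args, **kwargs)

    cls.__init__ = __init__
    return cls

@record_init_args
class ClassA:
    def __init__(self, x, y=0):
        self.x, self.y = x, y

a = ClassA(1, y=2)
replica = ClassA(*a._init_args, **a._init_kwargs)   # a fresh, equivalent instance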