While I was reading through the SWIG documentation I came across these lines:
C/C++ global variables are fully supported by SWIG. However, the underlying mechanism is somewhat different than you might expect due to the way that Python assignment works. When you type the following in Python
a = 3.4
"a" becomes a name for an object containing the value 3.4. If you later type
b = a
then "a" and "b" are both names for the object containing the value 3.4. Thus, there is only one object containing 3.4 and "a" and "b" are both names that refer to it. This is quite different than C where a variable name refers to a memory location in which a value is stored (and assignment copies data into that location). Because of this, there is no direct way to map variable assignment in C to variable assignment in Python.
To provide access to C global variables, SWIG creates a special object called `cvar' that is added to each SWIG generated module. Global variables are then accessed as attributes of this object.
My question is: why is it necessary to implement it this way? Even with this approach, object attributes are themselves implemented as objects.
Please see the Python code snippet below:
>>> a = 10
>>> b = a
>>> a is b
True
>>> class sample:
...     pass
...
>>> obj = sample()
>>> obj.a = 10
>>> obj.b = obj.a
>>> obj.a is obj.b
True
In both of the above cases, object assignment happens in the same way.
It's all about the fact that SWIG has to provide an interface to a library in C/C++ which acts differently.
Let us assume that, instead of implementing a cvar object, SWIG simply used PyInts etc. as attributes of the generated modules (which is what "normal" C extensions do).
Then, when the user assigns a value to the variable from Python code, a new PyInt object is bound to that attribute, but the original variable used by the library is unchanged, because the module object does not know that it has to modify the C global variable when an assignment is made.
This means that, while the user will see the value change on the Python side, the C library wouldn't be aware of the change, because the memory location represented by the global variable didn't change its value.
In order to allow the user to set values in a manner that is visible from the C/C++ library, SWIG had to define this cvar object, which, when performing assignments, assigns the value to the library's variable under the cover, i.e. it changes the contents of the memory location that holds the value of the global variable.
This is probably done by providing an implementation of __setattr__ and __getattr__ (or __getattribute__), so that cvar is able to override the behaviour of assignment to an attribute.
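A rough pure-Python sketch of the idea (my own illustration, not SWIG's actual generated code; the getter/setter functions here stand in for the C-level accessors SWIG generates):
# A cvar-like object that forwards every attribute read/write to a
# (here faked) C-level getter and setter. In real SWIG output the two
# helper functions would be generated C code touching the actual globals.
_fake_c_storage = {'Foo': 3.4}   # stand-in for the C global variables

def _get_c_global(name):
    return _fake_c_storage[name]

def _set_c_global(name, value):
    _fake_c_storage[name] = value

class _Cvar(object):
    def __getattr__(self, name):          # called for cvar.Foo reads
        return _get_c_global(name)

    def __setattr__(self, name, value):   # called for cvar.Foo = ... writes
        _set_c_global(name, value)

cvar = _Cvar()
print(cvar.Foo)   # 3.4, read from the underlying storage
cvar.Foo = 7.0    # the write goes into the underlying storage
print(cvar.Foo)   # 7.0
The point is that every attribute assignment is routed through __setattr__, which can then update the real storage the C library reads from.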
This is kind of a high level question. I'm not sure what you'd do with code like this:
class Object(object):
    pass

obj = Object
obj.a = lambda: None
obj.d = lambda: dict
setattr(obj.d, 'dictionary', {4, 3, 5})
setattr(obj.a, 'somefield', 'somevalue')
If I'm going to call obj.a.somefield, why would I use print? It feels redundant.
I simply can't see what programming strictly by setting attributes would be good for.
I could write an entire program with all of my variables in object classes.
First, about your print question: print is used more for debugging, or for displaying attributes that an object produces as output, giving you information when you create it.
For example, there might be an object that you create by passing it data and it finds all of the basic statistics information of that data. You could have it return a dictionary via a method and access the values from there or you could simply access it via an attribute, making the data more readable.
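For example (a made-up class of my own, just to illustrate the pattern):
# A class that computes summary statistics once and exposes the
# results as attributes instead of returning a dictionary.
class Stats(object):
    def __init__(self, data):
        self.count = len(data)
        self.total = sum(data)
        self.mean = self.total / float(self.count)
        self.minimum = min(data)
        self.maximum = max(data)

s = Stats([3, 1, 4, 1, 5])
print(s.mean)     # 2.8 -- reads like plain data
print(s.maximum)  # 5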
As for the second part of your question, about why you would want to use attributes in general: they're more for internally passing information from function to function within an object, or for configuring an object. Python has different scopes that determine which information each function can access. All methods of an object can access that object's attributes, which allows you to avoid using external or global variables. That makes your object nice and self-contained. Global variables are generally avoided because they can get messy, so they are considered bad practice.
Taking that a step further, using setattr is a more sophisticated way of setting these attributes to make your code more readable. You could use a function to modify aspects of an object or you could "hide" the complexity inside your setattr so the user can use a higher level interface rather than getting bogged down in the specifics.
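As a rough sketch of what I mean (the Config class and its port attribute are invented purely for illustration):
# __setattr__ hides validation/conversion behind what looks like an
# ordinary attribute assignment.
class Config(object):
    def __setattr__(self, name, value):
        if name == 'port':
            value = int(value)             # normalise on the way in
            if not 0 < value < 65536:
                raise ValueError('port out of range')
        object.__setattr__(self, name, value)

cfg = Config()
cfg.port = '8080'         # the caller just assigns; conversion happens inside
print(cfg.port)           # 8080 (an int)
setattr(cfg, 'port', 80)  # setattr routes through the same __setattr__
print(cfg.port)           # 80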
I was just wondering whether variables (names) are objects, or whether the value is the object. If you could explain it in depth, I would appreciate it.
As far as I understand it, the values are objects, and the variables are just labels you use to refer to existing objects.
Any value in Python is an object; it has its type and attributes. If you want to see this for yourself, try invoking the function dir, passing it a constant, like:
dir(1)
you will see all attributes associated with the int object 1
Any variable in Python can be assigned any object as its value. You can think of the variable as something like a Post-it note with a name written on it. You can stick it onto any object (your refrigerator, your TV, a chair), and it will still be the same label, but the object it is "naming" will be different. So, a variable is just a way to bind a name to an object.
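A small illustration of the same idea:
label = 3.4
print(id(label))       # identity of the float object the label is stuck to

label = "refrigerator"
print(id(label))       # a different identity: same label, different object

other = label
print(other is label)  # True: two labels stuck to the same str object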
Suppose I have a module PyFoo.py that has a function bar. I want bar to print all of the local variables associated with the namespace that called it.
For example:
#! /usr/bin/env python
import PyFoo as pf
var1 = 'hi'
print locals()
pf.bar()
The last two lines would give the same output. So far I've tried defining bar as follows:
def bar(x=locals):
    print x()

def bar(x=locals()):
    print x
But neither works. The first ends up being what's local to bar's namespace (which I guess is because that's when it's evaluated), and the second is as if I passed in globals (which I assume is because it's evaluated during import).
Is there a way I can have the default value of argument x of bar be all variables in the namespace which called bar?
EDIT 2018-07-29:
As has been pointed out, what was given was an XY Problem; as such, I'll give the specifics.
The module I'm putting together will allow the user to create various objects that represent different aspects of a numerical problem (e.g. various topology definitions, boundary conditions, constitutive models, etc.) and define how any given object interacts with any other object(s). The idea is for the user to import the module, define the various model entities that they need, and then call a function which will take all objects passed to it, make the adjustments needed to ensure compatibility between them, and then write out a file that represents the entire numerical problem as a text file.
The module has a function generate that accepts each of the various types of aspects of the numerical problem. The default value for all arguments is an empty list. If a non-empty list is passed, then generate will use those instances for generating the completed numerical problem. If an argument is an empty list, then I'd like it to take in all instances in the namespace that called generate (from which I will then parse out the appropriate instances for the argument).
EDIT 2018-07-29:
Sorry for any lack of understanding on my part (I'm not that strong of a programmer), but I think I might understand what you're saying with respect to an instance being declared or registered.
From my limited understanding, could this be done by creating some sort of registry dataset (like a list or dict) in the module, created when the module is imported, with all of the module's classes taking this registry object in by default? During class initialization, self could be appended to that dataset, and then the generate function would take the registry as the default value for one of its arguments.
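Something like this rough sketch is what I have in mind (all names are placeholders, not my actual classes):
# Module-level registry; every entity registers itself on creation.
_registry = []

class ModelEntity(object):
    def __init__(self):
        _registry.append(self)

class Topology(ModelEntity):
    pass

class BoundaryCondition(ModelEntity):
    pass

def generate(entities=None):
    if entities is None:            # no explicit list: use everything registered
        entities = list(_registry)
    for entity in entities:
        print(type(entity).__name__)

t = Topology()
bc = BoundaryCondition()
generate()                          # prints Topology, then BoundaryCondition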
There's no way you can do what you want directly.
locals just returns the local variables in whatever namespace it's called in. As you've seen, you have access to the namespace the function is defined in at the time of definition, and you have access to the namespace of the function itself from within the function, but you don't have access to any other namespaces.
You can do what you want indirectly… but it's almost certainly a bad idea. At least this smells like an XY problem, and whatever it is you're actually trying to do, there's probably a better way to do it.
But occasionally it is necessary, so in case you have one of those cases:
The main good reason to want to know the locals of your caller is for some kind of debugging or other introspection function. And the way to do introspection is almost always through the inspect library.
In this case, what you want to inspect is the interpreter call stack. The calling function will be the first frame on the call stack behind your function's own frame.
You can get the raw stack frame:
inspect.currentframe().f_back
… or you can get a FrameInfo representing it:
inspect.stack()[1]
As explained at the top of the inspect docs, a frame object's local namespace is available as:
frame.f_locals
Note that this has all the same caveats that apply to getting your own locals with locals: what you get isn't the live namespace, but a mapping that, even if it is mutable, can't be used to modify the namespace (or, worse in 2.x, one that may or may not modify the namespace, unpredictably), and that has all cell and free variables flattened into their values rather than their cell references.
Also, see the big warning in the docs about not keeping frame objects alive unnecessarily (or calling their clear method if you need to keep a snapshot but not all of the references, but I think that only exists in 3.x).
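Putting this together, a sketch of bar from the question (in PyFoo.py) might look like this:
# PyFoo.py -- a sketch only, subject to all of the caveats above
import inspect

def bar():
    caller_frame = inspect.currentframe().f_back  # the frame that called bar
    try:
        print(caller_frame.f_locals)              # snapshot of the caller's locals
    finally:
        del caller_frame                          # don't keep the frame alive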
In a previous question yesterday, in the comments, I came to know that in Python the __code__ attribute of a function is mutable. Hence I can write code like the following:
def foo():
    print "Hello"

def foo2():
    print "Hello 2"

foo()
foo.__code__ = foo2.__code__
foo()
Output
Hello
Hello 2
I tried googling, but either because there is no information (I highly doubt this), or because the keyword (__code__) is not easily searchable, I couldn't find a use case for this.
It doesn't seem like "because most things in Python are mutable" is a reasonable answer either, because other attributes of functions — __closure__ and __globals__ — are explicitly read-only (from Objects/funcobject.c):
static PyMemberDef func_memberlist[] = {
    {"__closure__",   T_OBJECT,     OFF(func_closure),
     RESTRICTED|READONLY},
    {"__doc__",       T_OBJECT,     OFF(func_doc), PY_WRITE_RESTRICTED},
    {"__globals__",   T_OBJECT,     OFF(func_globals),
     RESTRICTED|READONLY},
    {"__module__",    T_OBJECT,     OFF(func_module), PY_WRITE_RESTRICTED},
    {NULL}  /* Sentinel */
};
Why would __code__ be writable while other attributes are read-only?
The fact is, most things in Python are mutable. So the real question is, why are __closure__ and __globals__ not?
The answer initially appears simple. Both of these things are containers for variables which the function might need. The code object itself does not carry its closed-over and global variables around with it; it merely knows how to get them from the function. It grabs the actual values out of these two attributes when the function is called.
But the scopes themselves are mutable, so this answer is unsatisfying. We need to explain why modifying these things in particular would break stuff.
For __closure__, we can look to its structure. It is not a mapping, but a tuple of cells. It doesn't know the names of the closed-over variables. When the code object looks up a closed-over variable, it needs to know its position in the tuple; they match up one-to-one with co_freevars which is also read-only. And if the tuple is of the wrong size or not a tuple at all, this mechanism breaks down, probably violently (read: segfaults) if the underlying C code isn't expecting such a situation. Forcing the C code to check the type and size of the tuple is needless busy-work which can be eliminated by making the attribute read-only. If you try to replace __code__ with something taking a different number of free variables, you get an error, so the size is always right.
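A small illustration (my own, not from the original answer): the cells line up with co_freevars, and assigning a __code__ object with a mismatched number of free variables is rejected:
def make_adder(n):
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
print(add5.__code__.co_freevars)    # ('n',)
print(add5.__closure__)             # (<cell ...>,) -- one cell, matching co_freevars

def plain(x):
    return x                        # no free variables at all

try:
    plain.__code__ = add5.__code__  # mismatched number of free variables
except ValueError as e:
    print(e)                        # e.g. "plain() requires a closure of length exactly 1"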
For __globals__, the explanation is less immediately obvious, but I'll speculate. The scope lookup mechanism expects to have access to the global namespace at all times. Indeed, the bytecode may be hard-coded to go straight to the global namespace, if the compiler can prove no other namespace will have a variable with a particular name. If the global namespace was suddenly None or some other non-mapping object, the C code could, once again, violently misbehave. Again, making the code perform needless type checks would be a waste of CPU cycles.
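As a small check of the first point, a function's __globals__ really is the live module namespace dict (not a copy), which is what the lookup machinery assumes it can always treat as a dict:
x = 42

def f():
    return x

print(f.__globals__ is globals())   # True: the very same dict object
print(f.__globals__['x'])           # 42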
Another possibility is that (normally-declared) functions borrow a reference to the module's global namespace, and making the attribute writable would cause the reference count to get messed up. I could imagine this design, but I'm not really sure it's a great idea since functions can be constructed explicitly with objects whose lifetimes might be shorter than that of the owning module, and these would need to be special-cased.
I am aware that numeric values are immutable in python. I have also read how everything is an object in python. I just want to know if numeric types are also objects in python. Because if they are objects, then the variables are actually reference variables right? Does it mean that if I pass a number to a function and modify it inside a function, then two number objects with two references are created? Is there a concept of primitive data types in python?
Note: I too was thinking of them as objects, but visualizing it in Python Tutor suggests something different:
http://www.pythontutor.com/visualize.html#mode=edit
def test(a):
    a += 10

b = 100
test(b)
Or is it a defect in the visualization tool?
Are numeric types objects?
>>> isinstance(1, object)
True
Apparently they are. :-).
Note that you might need to adjust your mental model of an object a little. It seems to me that you're thinking of an object as something that is "mutable" -- that isn't the case. In reality, we need to think of Python names as references to objects; an object may itself hold references to other objects.
name = something
Here, the right hand side is evaluated -- All the names are resolved into objects and the result of the expression (an object) is referenced by "name".
OK, now let's consider what happens when you pass something to a function.
def foo(x):
    x = 2

z = 3
foo(z)
print(z)
What do we expect to happen here? Well, first we create the function foo. Next, we create the object 3 and reference it by the name z. After that, we look up the object that z references and pass it to foo. Upon entering foo, that object gets referenced by the (local) name x. We then create the object 2 and reference it by the local name x. Note that x has nothing to do with the global z -- they're independent references. Just because they were referencing the same object when you entered the function doesn't mean that they have to reference the same object for all time. We can change what a name references at any point by using an assignment statement.
Note, your example with += may seem to complicate things, but you can think of a += 10 as a = a + 10 if it helps in this context. For more information on += check out: When is "i += x" different from "i = i + x" in Python?
Everything in Python is an object, and that includes the numbers. There are no "primitive" types, only built-in types.
Numbers, however, are immutable. When you perform an operation with a number, you are creating a new number object.
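A quick way to see this:
x = 1000
print(id(x))   # identity of the original int object

x += 1         # the addition produces a new int object; x is rebound to it
print(id(x))   # a different identity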