I have some code that has this general structure:
def runSolver():
    global a
    global b
    while condition:
        # do some stuff
        recalculateIntermediateParameters()
        # do more stuff
    return result

def recalculateIntermediateParameters():
    # read globals
    # calculate
    return recalculatedParameter

# main
if __name__ == "__main__":
    runSolver()
I'm wondering if this is the best implementation. For instance, is it bad practice to have the globals declared inside a function? I know in Java, a global variable is best declared outside any function definitions.
My thought was that this would be syntactically "better":
def runSolver():
    a = foo
    b = bar
    # do some stuff
    return result

def recalculateIntermediateParameters(a, b):
    # do some stuff
    return recalculatedParameter
But what if a and b are only read, not manipulated, by the function? Does that affect where the globals should go? Further, what if a and b are lengthy lists? Does it make sense from a performance perspective to pass the values from function to function? Which paradigm offers the best compromise between "pythonic" code and performance?
You only need to declare a name as global when you assign to it inside a function. If you only reference a and b and never assign to them, you can omit the global statements.
Functions are globals too, for example: recalculateIntermediateParameters is never assigned to, only referenced, and you didn't need a global recalculateIntermediateParameters statement to be able to call it.
Python uses different opcodes to assign to locals versus globals. If you do:
def foo():
    a = 10
versus
def foo():
    global a
    a = 10
Python compiles these to two different operations: one binds a as a local, the other binds it as the module-level global.
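You can see this for yourself with the dis module; a minimal sketch (the exact bytecode output varies between CPython versions, but the STORE_FAST versus STORE_GLOBAL distinction is visible):

import dis

def local_assign():
    a = 10          # compiled to STORE_FAST: a is a local

def global_assign():
    global a
    a = 10          # compiled to STORE_GLOBAL: a is the module-level name

dis.dis(local_assign)    # output includes STORE_FAST (a)
dis.dis(global_assign)   # output includes STORE_GLOBAL (a)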
More generally speaking, you want to avoid using globals to pass around state. Pass values around as parameters instead (primitives, or a compound structure such as a dictionary or a custom class instance). That makes your code easier to read and debug, because you can trace the state through the functions instead of having to keep track of a global separately.
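As a rough sketch of what a parameter-passing version of the solver could look like (the loop, the calculation and the input values below are placeholders, not taken from the original code):

def recalculateIntermediateParameters(a, b):
    # read a and b, calculate, and return the result
    # instead of touching module-level state
    return sum(a) + sum(b)      # placeholder calculation

def runSolver(a, b):
    result = 0
    for _ in range(3):          # stands in for "while (condition)"
        # do some stuff
        result = recalculateIntermediateParameters(a, b)
        # do more stuff
    return result

if __name__ == "__main__":
    a = [1, 2, 3]               # stands in for your real inputs
    b = [4, 5, 6]
    print(runSolver(a, b))      # 21

Passing the lists around is cheap, by the way: Python passes references to objects, not copies, so handing a large list to a function does not duplicate its contents.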
I used to be a C programmer, so I'm used to passing every variable as an argument or a pointer, and being discouraged from defining global variables.
I am going to use some variables in several functions in Python.
Generally, which is better: passing the variable as an argument, or storing it on self once we get its value? Does Python have any general rules about this?
Like this:
class A:
    def func2(self, var):
        print var

    def func1(self):
        var = 1
        self.func2(var)

class B:
    def func2(self):
        print self.var

    def func1(self):
        self.var = 1
        self.func2()
Which is better? A or B?
In Python, you have a lot of freedom to do what "makes sense". In this case, I would say that it depends on how you plan on using func2 and who will be accessing it. If func2 is only ever supposed to act upon self.var, then you should code it as such. If other objects are going to need to pass in different arguments to func2, then you should allow for it to be an argument. Of course, this all depends on the larger scope of what you're trying to do, but given your simple example, this makes sense.
Also, I'm confused about how your question relates to global variables. Member variables are not the same thing as global variables.
Edited to reflect updated post:
The difference between A and B in your example is that B persists the information about self.var, while A does not. If var needs to be persisted as part of the object's state, then you need to store it as part of self. I get the sense that your question might relate more to objects as a general concept than anything Python-specific.
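To make the difference concrete, here is roughly what using the two classes from the question looks like:

a = A()
a.func1()        # prints 1; var was a local inside func1 and is gone afterwards
# a.var          # would raise AttributeError: nothing was stored on the instance

b = B()
b.func1()        # prints 1 and stores the value on the instance
print(b.var)     # 1 -- the value persists as part of b's state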
Of course it's better to design your program to use scope intelligently. The most obvious problem is that a mutation of a global variable can affect distant parts of code in ways that are difficult to trace, but in addition, garbage collection (reference counting, whatever) becomes effectively moot when your references live in long-lived scopes.
That said, Python has a global keyword, but it doesn't have globals in the same way c does. Python globals are module level, so they're namespaced with the module by default. The downstream programmer can bypass or alias this namespacing, but that's his/her problem. There are certainly cases where defining a module-level configuration value, pseudo-enum or -const makes sense.
Next, consider whether you need to maintain state: if the behavior of an object depends on it being aware of a certain value, make it a property. You can do that by attaching the value to self. Otherwise, just pass the value as an argument. (But then, if you have a lot of methods and no state, ask yourself if you really need a class, or should they just be module functions?)
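As a small illustration of that rule of thumb (the Counter and double names below are made up for the example):

class Counter(object):
    # needs to remember state between calls, so it stores it on self
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

def double(value):
    # only needs the value for this one call, so it takes an argument
    # and needs no class at all
    return value * 2

c = Counter()
print(c.increment())   # 1
print(c.increment())   # 2 -- the count persisted on the instance
print(double(21))      # 42 -- stateless, so a plain module function is enough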
This question has implications for object-oriented design. Python is an object-oriented language; C is not. You would be dramatically undermining (and in some cases thwarting) those object-oriented advantages by using in/out-style programming or global variables everywhere in Python, except where there's a particular reason to do so.
Consider the following reasons, which are not exhaustive:
Garbage collection won't know when to collect if the variables are all global
You no longer have fields (which is what "self" helps you reference). Say your object is a Cat; there isn't some global name for a cat which you reassign whenever a new Cat appears in your neighborhood. Rather, each cat has its own name, age, size, etc. Someone who wants to find out how big the cat is shouldn't have to go to some global repository of cat sizes and look it up, they should just look at the cat
You can run into problems with primitives because Python, unlike C, does not (easily) let you work with the address of a variable. If I pass in an integer variable, I can't change the value of the variable in its original location, only within the scope of the function. This can be solved with global variables, but only messily. Consider the following code:
def foo(x):
    x = 3

myVar = 5
foo(myVar)
print(myVar)
This will, of course, output 5, not 3. There is no pointer parameter like C's *x, so if we wanted foo to reassign 3 to the caller's variable, solving this in Python would be rather tricky. Instead, we could write:
class Foo:
    x = 5

def foo(fooObj):
    fooObj.x = 3

myFoo = Foo()
foo(myFoo)
print(myFoo.x)
Problem solved - it now outputs 3, not 5!
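A mutable container such as a one-element list gives a similar effect without defining a class; this is just an alternative sketch of the same idea:

def foo(box):
    box[0] = 3       # mutates the shared list in place; no rebinding, no global

my_box = [5]
foo(my_box)
print(my_box[0])     # 3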
As a general rule, it is better to use self whenever possible, to encapsulate internal information and bind it to the object (or class). It may be helpful to explain how self and classes work.
In Python, the object a method operates on is passed to it explicitly, just as you would do in C if you wanted to do OOP. This is different from other object-oriented languages, like Java or C++, where the this argument is passed implicitly (but it is always passed!).
Thus, if you define a class like:
class B(object):
    def __init__(self, var=None):  # this is the constructor
        self.var = var

    def func2(self):
        print self.var
when you call an object's method with the . operator, that object is passed as the first argument, which maps to self in the method signature:
b = B(1)     # object b is created and B.__init__(b, 1) is called
b.func2()    # B.func2(b) is called, outputs 1
I hope this clears things up for you a bit.
I recommend focusing on proximity. If the variable only relates to the current method or is created to be passed to another method, then it probably isn't expressing the persistent state of the class instance. Create the variable and throw it away when you're done.
If the variable describes an important facet of the instance, use self. This does not conflict with your aversion to global variables, as the variable is encapsulated within the instance. Class and module variables are also fine for the same reason.
In short, both A and B are proper implementations depending on context. I'm sorry that I haven't given you a clear answer but it has more to do with how important an object is to the objects around it than maintaining any sort of community standard. That you asked the question makes me think you'll make a reasonable judgement.
What I am trying to do is create a module containing a class, a function that acts as an interface to that class, and a variable name created on the fly inside that function, pointing to an instance of the class. The function and the class should live in a separate module, and their usage should be in a different Python file.
I think it's much easier to understand what I am trying to do by looking at my code.
This is first.py:
class FirstClass:
    def setID(self, _id):
        self.id = _id

    def func(self):
        pass

# An 'interface' for FirstClass
def fst(ID):
    globals()['%s' % ID] = FirstClass(ID)
    return globals()['%s' % ID]
Now, if I call fst('some_text') right in first.py, the result is pretty much what I dreamed of: later on, any time I write some_text.func(), it calls func(), because some_text is pointing to an instance of FirstClass.
But when second.py is something like this:
from first import fst

fst('sample_name')
sample_name.func()
then the answer from Python is:
NameError: name 'sample_name' is not defined.
Which is somewhat reasonable. So my question is: is there a "prettier" method, or a completely different one, to do this? Or do I just have to change something small in my code to get this done?
Thank you!
Don't set it as a global in the function. Instead, just return the new instance from the function and set the global to that return value:
def fst(ID):
    return FirstClass(ID)
then in second.py:
sample_name = fst('sample_name')
If that assignment happens inside a function, declare sample_name as a global there.
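A quick sketch of that case, assuming the first.py from the question (the setup function is illustrative):

# second.py
from first import fst

sample_name = None          # module-level name

def setup():
    global sample_name      # we assign to the name, so declare it global
    sample_name = fst('sample_name')

setup()
sample_name.func()          # works: the module-level name now holds the instance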
The globals() function only ever returns the globals of the module in which you call it. It will never return the globals of whatever is calling the function. If you feel you need access to those globals, rethink your code; you rarely, if ever, need to alter the globals of whatever is calling your function.
If you are absolutely certain you need access to the caller globals, you need to start hacking with stack frames:
# retrieve caller globals
import sys
caller_globals = sys._getframe(1).f_globals
But, as the documentation of sys._getframe() states:
CPython implementation detail: This function should be used for internal and specialized purposes only. It is not guaranteed to exist in all implementations of Python.
Context: I'm making a Ren'py game. The value is Character(). Yes, I know this is a dumb idea outside of this context.
I need to create a variable, named by an input string, from inside a class, where the variable itself lives outside the class's scope:
class Test:
    def __init__(self):
        self.dict = {}  # used elsewhere to give the inputs for the function below

    def create_global_var(self, variable, value):
        # the equivalent of exec("global {0}; {0} = {1}".format(str(variable), str(value)))
        pass

    # other functions in the class that require this

Test().create_global_var("abc", "123")  # hence abc = 123
I have tried vars()[], globals()[variable] = value, etc., and they simply do not work (they don't even define anything). Edit: this was my problem.
I know that the following would work equally as well, but I want the variables in the correct scope:
setattr(self.__class__, variable, value) # d.abc = 123, now. but incorrect scope.
How can I create a variable in the global scope from within a class, using a string as the variable name, without using attributes or exec in python?
And yes, I'll be sanity-checking.
First things first: what we call the "global" scope in Python is actually the "module" scope (on the good side, it diminishes the "evils" of using global vars). Then, for creating a global var dynamically, although I still can't see why that would be better than using a module-level dictionary, just do:
globals()[variable] = value
This creates a variable in the current module. If you need to create a module variable in the module from which the method was called, you can peek at the globals dictionary of the caller frame using:
from inspect import currentframe
currentframe(1).f_globals[variable] = value
Now, this seems especially useless, since you may create a variable with a dynamic name, but you can't access it dynamically (unless you use the globals dictionary again).
Even in your test example, you create the "abc" variable by passing the method a string, but then you have to access it with a hardcoded abc - the language itself is designed to discourage this (hence the difference from JavaScript, where array indexes and object attributes are interchangeable, while in Python you have distinct mapping objects).
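To see the mismatch, here is a minimal sketch using globals() directly (the name abc is just an example):

name = "abc"            # the name is chosen at runtime
globals()[name] = 123   # creates a module-level variable dynamically

print(abc)              # but reading it back uses the hardcoded name...
print(globals()[name])  # ...or the very same globals() dict you started with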
My suggestion is that you use an explicit module-level dictionary and create all your dynamic variables as key/value pairs there:
names = {}

class Test(object):
    def __init__(self):
        self.dict = {}  # used elsewhere to give the inputs for the function below

    def create_global_var(self, variable, value):
        names[variable] = value
(on a side note, in Python 2 always inherit your classes from "object")
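Using that module-level dictionary then looks roughly like this:

t = Test()
t.create_global_var("abc", 123)
print(names["abc"])    # 123 -- looked up with the same string, no new global needed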
You can use setattr(__builtins__, 'abc', '123') for this.
Do mind that this is most likely a design problem and you should rethink the design.
Both of these blocks of code work. Is there a "right" way to do this?
class Stuff:
    def __init__(self, x=0):
        global globx
        globx = x

    def inc(self):
        return globx + 1

myStuff = Stuff(3)
print myStuff.inc()
Prints "4"
class Stuff:
    def __init__(self, x=0):
        self.x = x

    def inc(self):
        return self.x + 1

myStuff = Stuff(3)
print myStuff.inc()
Also prints "4"
I'm a noob, and I'm working with a lot of variables in a class. Started wondering why I was putting "self." in front of everything in sight.
Thanks for your help!
You should use the second way; then every instance has its own separate x.
If you use a global variable then you may find you get surprising results when you have more than one instance of Stuff as changing the value of one will affect all the others.
It's normal to have explicit self's all over your Python code. If you try tricks to avoid that you will be making your code difficult to read for other Python programmers (and potentially introducing extra bugs)
There are two kinds of "class scope variables". One is to use self; these are called instance variables, and each instance of a class has its own copy of them. The other is to define variables in the class body, which can be done like this:
class Stuff:
    globx = 0

    def __init__(self, x=0):
        Stuff.globx = x
    ...
This is called a class attribute; it can be accessed directly as Stuff.globx and is owned by the class, not by its instances, much like a static variable in Java.
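For example, with the class above (ignoring the elided methods), the single shared value is visible through every instance:

s1 = Stuff(3)
s2 = Stuff(7)          # __init__ rebinds the shared class attribute
print(Stuff.globx)     # 7 -- one value, owned by the class
print(s1.globx)        # 7 -- s1 sees the shared value, not its "own" 3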
You should never use the global statement for a "class scope variable", because that is not what it creates. A variable declared global lives in the global scope, i.e. the namespace of the module in which the class is defined.
Namespaces and related concepts are introduced in the Python tutorial.
Those are very different semantically. self. means it's an instance variable, i.e. each instance has its own. This is probably the most common kind, but not the only one. Then there are class variables, defined at class level (and therefore at the time the class definition is executed) and accessible in class methods. They cover most uses of static fields, and are most probably what you want when you need to share something between instances (which is perfectly valid, although not automatically the one and only way to solve a given problem). You probably want one of those, depending on what you're doing. Really, we can't read your mind and tell you which one fits your problem.
Global variables are a different story. They're, well, global: everyone has the same one. This is almost never a good idea (for reasons explained on many occasions), but if you're just writing a quick and dirty script and need to share something between several places, they can be acceptable.
I am using a list that several functions in my program work on. It is actually a shared list, and all of my functions can edit it. Is it really necessary to declare it as "global" in all the functions?
I mean, do I have to put the global keyword in each function that uses it, or is defining it outside of all the functions enough, without putting the global keyword before its definition?
When you assign a variable (x = ...), you are creating a variable in the current scope (e.g. local to the current function). If it happens to shadow a variable from an outer (e.g. global) scope, well, too bad - Python doesn't care (and that's a good thing). So you can't do this:
x = 0

def f():
    x = 1

f()
print x  #=> 0
and expect 1. Instead, you need to declare that you intend to use the global x:
x = 0

def f():
    global x
    x = 1

f()
print x  #=> 1
But note that assignment of a variable is very different from method calls. You can always call methods on anything in scope - e.g. on variables that come from an outer (e.g. the global) scope because nothing local shadows them.
Also very important: member assignment (x.name = ...), item assignment (collection[key] = ...), slice assignment (sliceable[start:end] = ...) and probably more are all method calls as well! And therefore you don't need global to change a global's members or call its methods (even when they mutate the object).
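A short sketch of that distinction, mutating a shared list versus rebinding the name (the names below are illustrative):

items = []            # module-level list shared by the functions below

def add_one():
    # no 'global' needed: we only call a method on items,
    # we never rebind the name itself
    items.append(1)

def replace():
    global items      # rebinding the name does need the declaration
    items = [1, 2, 3]

add_one()
print(items)          # [1]
replace()
print(items)          # [1, 2, 3]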
Yes, you need to use global foo if you are going to write to it.
foo = []

def bar():
    global foo
    ...
    foo = [1]
No, you can pass the list in as a default argument to your function:
alist = []

def fn(alist=alist):
    alist.append(1)

fn()
print alist  # [1]
I'd say it's bad practice though - kind of too hackish. If you really need a globally available, singleton-like data structure, I'd use the module-level variable approach, i.e. put alist in a module and then, in your other modules, import that module and use the attribute:
In file foomodule.py:
alist = []
In file barmodule.py:
import foomodule

def fn():
    foomodule.alist.append(1)
    print foomodule.alist  # [1]