I have spent the past few hours reading around but I'm not really understanding what I am sure is a very basic concept: passing values (as variables) between different functions.
class BinSearch:
    def __init__(self, length, leng, r, obj_function, middle):
        self.length = length
        self.leng = leng
        self.r = r
        self.obj_function = obj_function
        self.middle = middle
        self.objtobin(obj_function)

    def BinarySearch(length, leng, r):
        mid = np.arange(0, len(length), 1)
        middle = min(mid) + (max(mid) - min(mid)) // 2
        L_size = []
        L = length[middle]
        L_size.append(L)
        return L

    def objtobin(self, obj_function):
        # length,leng,middle = BinSearch.BinarySearch()
        if (obj_function >= 0.98):
            return BinSearch.BinarySearch(self.length, min(leng), self.middle - 1)
        else:
            return BinSearch.BinarySearch(self.length, self.middle + 1, max(leng))

BinSearch.objtobin(obj_function=max(objectivelist))
When I run the above code, the BinSearch.objtobin call raises "objtobin() missing 1 required positional argument: 'self'". What should I do about this error?
Thanks for help!
Firstly, thank you all for your help. But I do not understand how I should change this code.
I have started modifying your code so that it would run without errors, but there are a few other mistakes in there as well, and I have not tried to make sense of all your parameters.
It would look something like this, but I will explain below.
# --- OP's attempt that fails ---
# BinSearch.objtobin(obj_function=max(objectivelist))
# -- -- -- -- -- -- -- -- -- -- --

# --- Using an instance ---
figure_this_out_yourself = 100
# this variable is a placeholder for any parameters I had to supply

myBinSearchInstance = BinSearch(
    length = figure_this_out_yourself,
    leng = [figure_this_out_yourself],
    r = figure_this_out_yourself,
    obj_function = figure_this_out_yourself,
    middle = figure_this_out_yourself)

myBinSearchInstance.objtobin(obj_function = max(objectivelist))
There is one important concept to be grasped here: self.
Let us consider this simple example function, which outputs a number one larger than it did the last time it was called.
counter = 0

def our_function():
    global counter
    counter = counter + 1
    return counter

print(our_function())
It is okay as it is, but it uses a global variable to keep track of its state. Imagine using it for two different purposes at the same time. That would be chaos!
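To make the chaos concrete, here is a quick sketch (reusing our_function from just above) of two unrelated counts ending up tangled in the same global:
# counting apples...
apples_seen = our_function()   # 1
apples_seen = our_function()   # 2

# ...and, somewhere else, counting oranges with the "same" counter
oranges_seen = our_function()  # 3, not 1, because both uses share the global 'counter'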
So instead, we package this inside a class.
# unfinished approach
counter = 0

class OurClass:
    # This is called a static method
    def our_function():
        global counter
        counter = counter + 1
        return counter

print(our_function())
When we try to run this, we run into a problem.
NameError: name 'our_function' is not defined
This happens because it is now accessible only within that class. So we need to call it as
print(OurClass.our_function())
That makes it okay to have functions with the same name around - as long as they are in different classes - but it does not solve our chaos when using our_function for multiple purposes at once. What we want is basically two independent counter variables. Of course we could manually create a second function that uses a second global variable, but that gets out of hand quickly as you use it more and more. This is where instances come into play.
So let's move counter inside our class.
class OurClass:
    counter = 0

    def our_function():
        global counter
        counter = counter + 1
        return counter
You guessed it - now counter is no longer defined:
NameError: name 'counter' is not defined
So let us pass the instance variable that we want to use into the function as a parameter. And then use that instance to get its counter:
class OurClass:
    counter = 0

    def our_function(the_instance):
        the_instance.counter = the_instance.counter + 1
        return the_instance.counter

myInstance = OurClass()
mySecondInstance = OurClass()
print(OurClass.our_function(myInstance))
print(OurClass.our_function(mySecondInstance))
And successfully, both print statements print 1!
But that is a bit awkward, because the_instance is not like the other arguments. To make it distinct, Python allows us to omit the first parameter and instead supply the instance as the receiver of the call. Both of these work:
print(myInstance.our_function())
print(OurClass.our_function(mySecondInstance))
Python uses a very strong convention for these parameters. Instead of the_instance, call it self. See Why is self only a convention?.
class OurClass:
    counter = 0

    def our_function(self):
        self.counter = self.counter + 1
        return self.counter

myInstance = OurClass()
mySecondInstance = OurClass()
print(myInstance.our_function())
print(mySecondInstance.our_function())
Now we're almost done! Just one thing left to understand: Where do the parameters of __init__() come from?
They are passed to __init__() from the line where we construct it. So let me demonstrate by adding a starting value for our counter:
class OurClass:
    counter = 0

    def __init__(self, starting_value):
        self.counter = starting_value

    def our_function(self):
        self.counter = self.counter + 1
        return self.counter

myInstance = OurClass(5)
mySecondInstance = OurClass(10)
print(myInstance.our_function())
print(OurClass.our_function(mySecondInstance))
This prints 6 and 11.
But what do those comments mean with @staticmethod? For that, see Difference between staticmethod and classmethod and Do we really need @staticmethod decorator in python to declare static method.
In short: You can decorate any method in a class with either @staticmethod or @classmethod.
@staticmethod means that it can be called like myInstance.foo() even though OurClass.foo() does not take self as a parameter. Without that decorator, you could only call it as OurClass.foo(), but not as myInstance.foo().
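For example, a minimal sketch (greet is a made-up method, not part of the counter example):
class OurClass:
    @staticmethod
    def greet():
        # no self: the method does not touch any instance state
        print("hello")

myInstance = OurClass()
OurClass.greet()    # works
myInstance.greet()  # also works, because of @staticmethod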
@classmethod means that it can be called like myInstance.foo(), and it does not get myInstance as its first parameter but rather the class of myInstance, which is OurClass. That allows you, e.g., to define alternative constructors. And when an inherited class method is called on a subclass, the subclass itself is passed in, so such a constructor builds the right type.
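Here is a separate small sketch of the alternative-constructor idea (from_double is a made-up name):
class OurClass:
    counter = 0

    def __init__(self, starting_value):
        self.counter = starting_value

    @classmethod
    def from_double(cls, value):
        # cls is the class itself (OurClass or a subclass), not an instance
        return cls(value * 2)

myInstance = OurClass.from_double(5)
print(myInstance.counter)  # 10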
The comments are pointing out that you could also use a @staticmethod and avoid creating an instance. For that, you would have to not use any variables of the class itself - but you aren't keeping those around for long anyway, so you could pass them all as parameters to the function.
Source: Learning Python by Mark Lutz, page 503, on classes versus closures.
It states: "classes may seem better at state retention because they make their memory more explicit with attribute assignments. Closure functions often provide a lighter-weight and viable alternative when retaining state is the only goal. They provide for per-call localized storage for data required by a single nested function."
What does state retention mean, and how do attribute assignments make the memory more explicit?
Could anyone provide an example showing that a closure is more lightweight for retaining state, and explain what per-call localized storage for data means in the context of a single nested function?
This is a simple closure:
def make_counter(start=0):
    count = start - 1
    def counter():
        nonlocal count  # requires 3.x
        count += 1
        return count
    return counter
You call it like this:
>>> counter = make_counter()
>>> counter()
0
>>> counter()
1
>>> # and so on...
As you can see, it keeps track of how many times it's been called. This information is called "state." It is "per-call localized state" because you can make several counters at once, and they will not interfere with each other. In this case, the state is retained (almost) implicitly, based on the closure keeping a reference to the count variable from its enclosing scope. On the other hand, a class would be more explicit:
class Counter:
    def __init__(self, start=0):
        self.count = start - 1
    def __call__(self):
        self.count += 1
        return self.count
Here, the state is explicitly attached to the object.
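To illustrate the "per-call localized" point with the two definitions above, each closure and each instance keeps its own independent count:
c1 = make_counter()
c2 = make_counter(start=100)
print(c1(), c1(), c2())  # 0 1 100: each closure has its own 'count'

a = Counter()
b = Counter(start=100)
print(a(), a(), b())     # 0 1 100: here the state is visible as .count on each instance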
I'm using a recursive function to sort a list in Python, and I want to keep track of the number of sorts/merges as the function continues. However, when I declare/initialize the variable inside the function, it becomes a local variable inside each successive call of the function. If I declare the variable outside the function, the function thinks it doesn't exist (i.e. has no access to it). How can I share this value across different calls of the function?
I tried to use the "global" variable tag inside and outside the function like this:
global invcount  ## I tried here, with and without the global tag

def inv_sort(listIn):
    global invcount  ## and here, with and without the global tag
    if (invcount == undefined):  ## can't figure this part out
        invcount = 0
    # do stuff
But I cannot figure out how to check for the undefined status of the global variable and give it a value on the first recursion call (because on all successive recursions it should have a value and be defined).
My first thought was to return the variable out of each call of the function, but I can't figure out how to pass two objects out of the function, and I already have to return the list for the recursive sort to work. My second attempt involved adding invcount to the list I'm passing as the last element, with an identifier such as "i27". Then I could check for the presence of the identifier (the letter i in this example) in the last element, pop() it off at the beginning of the function call, and re-add it during the recursion. In practice this is becoming really convoluted, and while it may work eventually, I'm wondering if there is a more practical or easier solution.
Is there a way to share a variable without directly passing/returning it?
There are a couple of things you can do. Taking your example, you should modify it like this:
invcount = 0

def inv_sort(listIn):
    global invcount
    invcount += 1
    # do stuff
But this approach means that you should zero invcount before each call to inv_sort.
So actually it's better to return invcount as part of the result, for example using tuples like this:
def inv_sort(listIn):
    # somewhere in your code, the recursive call
    recursive_result, recursive_invcount = inv_sort(argument)
    # this_call_invcount includes recursive_invcount
    return this_call_result, this_call_invcount
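As a concrete sketch of that tuple-returning pattern (a toy merge-style sort, not your actual code), each call adds its own work to the counts returned by the recursive calls and passes the total up:
def count_sort(listIn):
    # sorts a list and counts how many merge steps were performed
    if len(listIn) <= 1:
        return listIn, 0
    mid = len(listIn) // 2
    left, left_count = count_sort(listIn[:mid])
    right, right_count = count_sort(listIn[mid:])
    merged = []
    i = j = 0
    merges = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
        merges += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, merges + left_count + right_count

result, invcount = count_sort([3, 1, 2])
print(result, invcount)  # [1, 2, 3] 3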
There's no such thing as an "undefined" variable in Python, and you don't need one.
Outside the function, set the variable to 0. Inside the function, use the global keyword, then increment it.
invcount = 0

def inv_sort(listIn):
    global invcount
    ... do stuff ...
    invcount += 1
An alternative might be using a default argument, e.g.:
def inv_sort(listIn, invcount=0):
    ...
    invcount += 1
    ...
    listIn, invcount = inv_sort(listIn, invcount)
    ...
    return listIn, invcount
The downside of this is that your calls get slightly less neat:
l, _ = inv_sort(l) # i.e. ignore the second returned parameter
But this does mean that invcount automatically gets reset each time the function is called with a single argument (and it also provides the opportunity to inject a value of invcount if necessary for testing, e.g. assert inv_sort(test, 5) == (result, 6)).
Assuming that you don't need to know the count inside the function, I would approach this using a decorator function:
import functools

def count_calls(f):
    @functools.wraps(f)
    def func(*args):
        func.count += 1
        return f(*args)
    func.count = 0
    return func
You can now decorate your recursive function:
@count_calls
def inv_sort(...):
    ...
And check or reset the count before or after calling it:
inv_sort.count = 0
l = inv_sort(l)
print(inv_sort.count)
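A quick usage sketch with a trivial recursive stand-in (countdown is made up, since the sort itself isn't shown); every call, including the recursive ones, bumps the counter:
@count_calls
def countdown(n):
    if n > 0:
        countdown(n - 1)

countdown.count = 0
countdown(3)
print(countdown.count)  # 4: the initial call plus three recursive calls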
Consider the following code:
def apples():
    print(apples.applecount)
    apples.applecount += 1

apples.applecount = 0

apples()  # prints 0
apples()  # prints 1
# etc
Is this a good idea, bad idea or should I just destroy myself?
If you're wondering why I would want this: I have a function repeating itself every 4 seconds, and it uses win32com.client.Dispatch() to connect over Windows COM to an application. I think it's unnecessary to recreate that link every 4 seconds.
I could of course use a global variable, but I was wondering if this would be a valid method as well.
It would be more idiomatic to use an instance variable of a class to keep the count:
class Apples:
    def __init__(self):
        self._applecount = 0

    def apples(self):
        print(self._applecount)
        self._applecount += 1

a = Apples()
a.apples()  # prints 0
a.apples()  # prints 1
If you need to reference just the function itself, without the a reference, you can do this:
a = Apples()
apples = a.apples
apples() # prints 0
apples() # prints 1
It is basically a namespaced global. Your function apples() is a global object, and attributes on that object are no less global.
It is only marginally better than a regular global variable; namespaces in general are a good idea, after all.
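For the connection-caching case in the question, either style can be sketched like this; make_connection() is a hypothetical stand-in for the win32com Dispatch() call:
# Function-attribute version: the cached link is effectively a namespaced global.
def refresh():
    if refresh.link is None:
        refresh.link = make_connection()  # hypothetical: done once, then reused every 4 seconds
    # ... use refresh.link here ...
refresh.link = None

# Class-based version: the cached link lives on an instance instead.
class Refresher:
    def __init__(self):
        self._link = None

    def refresh(self):
        if self._link is None:
            self._link = make_connection()  # hypothetical helper
        # ... use self._link here ...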
I'm making a game in pygame and I have made an 'abstract' class whose sole job is to store the sprites for a given level (with the intent of having these level objects in a list to facilitate the player being moved from one level to another).
Alright, so to the question. If I can do the equivalent of this in Python (code courtesy of Java):
Object object = new Object() {
    public void overriddenFunction() {
        // new functionality
    }
};
Then when I build the levels in the game, I would simply have to override the constructor (or the class/instance method that is responsible for building the level) with the information on where the sprites go, because making a new class for every level in the game isn't that elegant an answer. Alternatively, I would have to make methods within the level class that build the level once a level object is instantiated, placing the sprites as needed.
So, before one of the more staunch developers goes on about how anti-Python this might be (I've read enough of this site to get that vibe from Python experts), just tell me if it's doable.
Yes, you can!
class Foo:
    def do_other(self):
        print('other!')
    def do_foo(self):
        print('foo!')

def do_baz():
    print('baz!')

def do_bar(self):
    print('bar!')

# Class-wide impact
Foo.do_foo = do_bar

f = Foo()
g = Foo()

# Instance-wide impact
g.do_other = do_baz

f.do_foo()    # prints "bar!"
f.do_other()  # prints "other!"
g.do_foo()    # prints "bar!"
g.do_other()  # prints "baz!"
So, before one of the more staunch developers goes on about how anti-Python this might be
Overwriting functions in this fashion (if you have a good reason to do so) seems reasonably Pythonic to me. One example of a reason you might have to do this is a dynamic feature to which static inheritance doesn't or can't apply.
The case against might be found in the Zen of Python:
Beautiful is better than ugly.
Readability counts.
If the implementation is hard to explain, it's a bad idea.
Yes, it's doable. Here, I use functools.partial to get the implied self argument into a regular (non-class-method) function:
import functools
import random  # needed for wacky_incr below

class WackyCount(object):
    "it's a counter, but it has one wacky method"
    def __init__(self, name, value):
        self.name = name
        self.value = value
    def __str__(self):
        return '%s = %d' % (self.name, self.value)
    def incr(self):
        self.value += 1
    def decr(self):
        self.value -= 1
    def wacky_incr(self):
        self.value += random.randint(5, 9)

# although x is a regular wacky counter...
x = WackyCount('spam', 1)

# it increments like crazy:
def spam_incr(self):
    self.value *= 2

x.incr = functools.partial(spam_incr, x)

print(x)
x.incr()
print(x)
x.incr()
print(x)
x.incr()
print(x)
and:
$ python2.7 wacky.py
spam = 1
spam = 2
spam = 4
spam = 8
$ python3.2 wacky.py
spam = 1
spam = 2
spam = 4
spam = 8
Edit to add note: this is a per-instance override. It takes advantage of Python's attribute lookup sequence: if x is an instance of class K, then x.attrname starts by looking in x's dictionary to find the attribute. If it is not found there, the next lookup is in K. All the normal class functions are actually K.func. So if you want to replace the class function dynamically, use @Brian Cane's answer instead.
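A quick demonstration of that lookup order, assuming the WackyCount example above has already run:
y = WackyCount('eggs', 1)
print('incr' in x.__dict__)  # True: x carries its own incr in its instance dictionary
print('incr' in y.__dict__)  # False: y falls back to the class attribute WackyCount.incr
x.incr()                     # uses the per-instance override (doubles the value)
y.incr()                     # uses the normal class method (adds 1)
print(x)
print(y)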
I'd suggest using a different class, via inheritance, for each level.
But you might get some mileage out of copy.deepcopy() and monkey patching, if you're really married to treating Python like Java.
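A minimal sketch of the inheritance route (Level and the sprite tuples are made-up placeholders, with no actual pygame calls), where each level overrides only the build step:
class Level:
    def __init__(self):
        self.sprites = []
        self.build()

    def build(self):
        # default: an empty level; subclasses override this
        pass

class LevelOne(Level):
    def build(self):
        # place this level's sprites; positions are invented for the sketch
        self.sprites.append(('player', 10, 20))
        self.sprites.append(('enemy', 40, 20))

class LevelTwo(Level):
    def build(self):
        self.sprites.append(('player', 5, 5))

levels = [LevelOne(), LevelTwo()]
print(levels[0].sprites)  # [('player', 10, 20), ('enemy', 40, 20)]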
I have a class which represents an object to be kept in a set. I would like the class itself to remember how many objects it has created, so that when SetObject() is called and __init__() creates a new object, that object receives a unique index. Maybe something like this:
class SetObject(object):
    # static class variable
    object_counter = 0

    def __init__(self, params):
        self.params = params
        self.index = self.get_index()

    def get_index(self):
        object_counter += 1
        return object_counter - 1

a = SetObject(paramsa)
b = SetObject(paramsb)

print a.index
print b.index
would produce
0
1
or something like this. Currently it seems that this approach gives a "variable referenced before assignment" error.
You need to write:
def get_index(self):
    SetObject.object_counter += 1
    return SetObject.object_counter - 1
otherwise it would only work if object_counter was a global variable.
You need to use a reference to the class to refer to its variables; you could perhaps use a class method (with the @classmethod decorator), but there is really no need to.
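For completeness, a sketch of what that @classmethod variant could look like (same names as the question):
class SetObject(object):
    object_counter = 0

    def __init__(self, params):
        self.params = params
        self.index = self.get_index()

    @classmethod
    def get_index(cls):
        # cls is SetObject here, so this updates the shared class attribute
        index = cls.object_counter
        cls.object_counter += 1
        return index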
Better still, use itertools.count() to get a fool-proof 'static' counter; there is then no need to reassign back to the class attribute:
import itertools
class SetObject(object):
object_counter = itertools.count().next
def __init__(self, params):
self.params=params
self.index = self.object_counter()
(The code above assumes Python 2; on Python 3, iterators do not have a .next method and you'd need to use functools.partial(next, itertools.count()) instead.)
Because the counter is an iterator, we don't need to assign to SetObject.object_counter at all. Subclasses can provide their own counter as needed, or re-use the parent class counter.
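On Python 3, as noted above, the same idea could be written with functools.partial (a sketch of that adjustment):
import functools
import itertools

class SetObject(object):
    # partial objects are not descriptors, so this is not turned into a bound method
    object_counter = functools.partial(next, itertools.count())

    def __init__(self, params):
        self.params = params
        self.index = self.object_counter()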
The line
object_counter += 1
translates to
object_counter = object_counter + 1
When you assign to a variable inside a scope (e.g. inside a function), Python assumes you want to create a new local variable. So it marks object_counter as local, which means that when you try to read its value (to add one) you get an UnboundLocalError: "local variable 'object_counter' referenced before assignment".
To fix it, tell Python where to look up object_counter. In general you can use the global or nonlocal keywords for this, but in your case you just want to look it up on the class:
self.__class__.object_counter += 1
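Putting that together, a corrected sketch of the class from the question using self.__class__ (names kept from the original, sample params made up):
class SetObject(object):
    object_counter = 0

    def __init__(self, params):
        self.params = params
        self.index = self.get_index()

    def get_index(self):
        # update the counter on the class, not on the instance
        self.__class__.object_counter += 1
        return self.__class__.object_counter - 1

a = SetObject('paramsa')
b = SetObject('paramsb')
print(a.index)  # 0
print(b.index)  # 1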