I'm having a problem with Python. I have a binary tree node type:
class NODE:
    element = 0
    leftchild = None
    rightchild = None
And I had to implement a function deletemin:
def DELETEMIN( A ):
    if A.leftchild == None:
        retval = A.element
        A = A.rightchild
        return retval
    else:
        return DELETEMIN( A.leftchild )
Yet, when I try to test this on the binary tree:
    1
   / \
  0   2
It should delete 0 by just setting it to null, but instead I get this:
    0
   / \
  0   2
Why can I not nullify a node within a function in Python? Is there a way to do this?
Python passes arguments by object-reference, just like Java, not by variable-reference. When you assign a new value to a local variable (including an argument), you're changing only the local variable, nothing else (don't confuse that with calling mutators or assigning to ATTRIBUTES of objects: we're talking about assignments to barenames).
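A minimal sketch of the distinction (names here are illustrative, not from the question's code):
def nullify(node):
    node = None             # rebinds only the local name 'node'

def clear_right(node):
    node.rightchild = None  # mutates the object: visible through every reference

class Node(object):
    rightchild = None

root = Node()
root.rightchild = Node()
nullify(root)
print(root is None)         # False -- the caller's name is untouched
clear_right(root)
print(root.rightchild)      # None -- attribute assignment does propagate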
The preferred solution in Python is generally to return multiple values, as many as you need, and assign them appropriately in the caller. So deletemin would return two values, the current returnval and the modified node, and the caller would assign the latter as needed. I.e.:
def DELETEMIN( A ):
    if A.leftchild is None:
        return A.element, A.rightchild
    else:
        # reattach the modified left subtree on the way back up
        retval, A.leftchild = DELETEMIN( A.leftchild )
        return retval, A
and in the caller, where you previously had foo = DELETEMIN( bar ), you'd use instead
foo, bar = DELETEMIN( bar )
Peculiar capitalization and spacing within parentheses, BTW, but that's another issue ;-).
There is no way to get "a pointer or reference to a caller's barename" (in either Python or Java) in the way you could, e.g., in C or C++. There are other alternative approaches, but they require different arrangements than you appear to prefer, so I recommend the multiple return values approach as here indicated.
Another approach: have delete_min return the (possibly new) root of the subtree it was given, and make every caller, including the recursive call, reattach it:
class Node:
    element = 0
    left_child = None
    right_child = None

def delete_min( A ):
    if A.left_child is None:
        return A.right_child
    else:
        A.left_child = delete_min(A.left_child)
        return A

tree = delete_min(tree)
Given a binary tree, check whether it is a mirror of itself (i.e., symmetric around its center).
Question link is here
The recursive method needs to traverse the tree twice.
But one of the comments provided a solution using a technique called 'null check'. I can't understand how, written this way, we avoid checking the tree twice.
Here is his code in C++:
bool isSymmetric(TreeNode* root) {
    if (!root) return true;
    return isSymmetric(root->left, root->right);
}

bool isSymmetric(TreeNode* t1, TreeNode* t2) {
    if (!t1 && !t2) return true;
    if (!t1 || !t2) return false;
    return t1->val == t2->val
        && isSymmetric(t1->left, t2->right)
        && isSymmetric(t1->right, t2->left);
}
I have also translated it into Python 3, and my code passed all test cases!
Here is my code:
# Definition for a binary tree node.
# class TreeNode:
#     def __init__(self, x):
#         self.val = x
#         self.left = None
#         self.right = None

class Solution:
    def isSymmetric(self, root):
        return self.helper(root)

    def helper(self, root):
        if root is None:
            return True
        # why can we redefine helper here?
        def helper(left, right):
            if left is None and right is None:
                return True
            if left is None or right is None:
                return False
            return left.val == right.val and helper(left.left, right.right) and helper(left.right, right.left)
        return helper(root.left, root.right)
I have never seen this kind of recursion before.
(1) Why can we redefine the function helper, with different arguments, inside helper itself?
(2) My intuition tells me that the helper function will stop executing once it returns back to the root, so the tree won't be checked twice, but I don't know why.
A def statement is really just a fancy assignment statement. In Solution.helper, you are defining a local variable named helper that is bound to another function. As a result, all references to the name helper inside Solution.helper (and inside the local function itself) resolve to the local function.
Solution.helper is not a recursive function; only the local function is. You could write the same thing (less confusingly but equivalently) as
class Solution:
    def isSymmetric(self, root):
        return self.helper(root)

    def helper(self, root):
        if root is None:
            return True
        def helper2(left, right):
            if left is None and right is None:
                return True
            if left is None or right is None:
                return False
            return left.val == right.val and helper2(left.left, right.right) and helper2(left.right, right.left)
        return helper2(root.left, root.right)
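A tiny sketch showing that a def really is just a name binding:
def greet():
    return "hi"

hello = greet            # a def'd function is an ordinary object bound to a name
print(hello())           # hi
print(hello is greet)    # True -- two names, one function object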
The role of the function isSymmetric(TreeNode* root) is pretty simple. It returns true if the tree is empty, and if it's not, it checks whether its left child is a mirror of its right child, which happens in isSymmetric(TreeNode* t1, TreeNode* t2).

So let's try to understand how the second function works. It is essentially designed to take two trees and check whether they are mirrors of each other. How? First, it does the obvious checks: if one is null and the other is not, it returns false, and if both are null it returns true. The interesting part happens when both are actual trees: they are mirrors exactly when the left child of one is the mirror of the right child of the other, and vice versa. You can draw a tree to see why this is the case; a diagram should be self-explanatory.
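As a quick sanity check of that reasoning, here is a minimal, self-contained sketch (is_mirror is an illustrative name for the two-argument function; the tree is chosen to be symmetric):
class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

def is_mirror(t1, t2):
    if t1 is None and t2 is None:
        return True
    if t1 is None or t2 is None:
        return False
    return (t1.val == t2.val
            and is_mirror(t1.left, t2.right)
            and is_mirror(t1.right, t2.left))

#      1
#    /   \
#   2     2
#  / \   / \
# 3   4 4   3
root = TreeNode(1)
root.left, root.right = TreeNode(2), TreeNode(2)
root.left.left, root.left.right = TreeNode(3), TreeNode(4)
root.right.left, root.right.right = TreeNode(4), TreeNode(3)

print(is_mirror(root.left, root.right))  # True; each node is visited only once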
Edit:
This question has been marked as a duplicate, but I don't think it is. Implementing the suggested answer, that is, to use the Mapping abc, does not have the behavior I would like:
from collections import Mapping

class data(Mapping):
    def __init__(self, params):
        self.params = params
    def __getitem__(self, k):
        print "getting", k
        return self.params[k]
    def __len__(self):
        return len(self.params)
    def __iter__(self):
        return ( k for k in self.params.keys() )

def func(*args, **kwargs):
    print "In func"
    return None

ps = data({"p1": 1., "p2": 2.})

print "\ncalling...."
func(ps)
print "\ncalling...."
func(**ps)
Output:
calling....
In func

calling....
getting p2
getting p1
In func
Which, as mentioned in the question, is not what I want.
The other solution, given in the comments, is to modify the routines that are causing problems. That would certainly work; however, I was looking for a quick (lazy?) fix!
Question:
How can I implement the ** operator for a class, other than via __getitem__? For example, I would like to be able to do this:
def func(**kwargs):
    <do some clever stuff>

x = some_generic_class()
func( **x )
without an implicit call to some_generic_class.__getitem__(). In my application I have already implemented __getitem__ with some data logging which I do not want to perform when the class is referenced as above.
If it's not possible to overload the ** operator, is it possible to detect when __getitem__ is being called as a result of the class being passed to a function, rather than explicitly?
Background:
I am working on a physics model that is built out of a set of packages which are chosen according to user input at runtime. The flexible structure of the model means that I rarely know the required parameters, and so I pass a dict of parameter names and values between the models. In order to make this more user-friendly, I am now trying to develop a class paramlist that overloads the dict functionality with a set of routines that do some consistency checking, set default values, etc. The idea is that I pass an instance of paramlist rather than a dict. One of the more important aims is to keep a log of which members of paramlist have been referenced by the physics packages and which ones have not. A stripped-down version is below, which aims to maintain a second dict that logs whether a parameter has been referenced:
from copy import copy

class paramlist(object):
    def __init__(self, params):
        self.params = copy(params)
        self.used = { k: False for k in self.params }

    def __getitem__(self, k):
        try:
            v = self.params[k]
        except KeyError:
            raise KeyError("Parameter {} not in parameter list".format(k))
        else:
            self.used[k] = True
            return v

    def __setitem__(self, k, v):
        self.params[k] = v
        self.used[k] = False
Which does not have the behaviour I want:
ps = paramlist( {"p1": 1.} )

def donothing( *args, **kwargs ):
    return None

donothing(ps)
print ps.used["p1"]
donothing(**ps)
print ps.used["p1"]
Output:
False
True
I would like the used entry to remain False in both cases, so that I can tell the user that one of their parameters was not used (implying that they screwed up and a default value has been used instead). I presume that the ** case has the effect of calling __getitem__ on every entry in the paramlist.
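For what it's worth, a minimal probe (Python 2 syntax, matching the question) suggests that ** unpacking asks the object for keys() and then calls __getitem__ once per key, which is consistent with the output above:
class Probe(object):
    def keys(self):
        print "keys() called"
        return ["a", "b"]
    def __getitem__(self, k):
        print "__getitem__ called with", k
        return 1

def f(**kwargs):
    return kwargs

f(**Probe())
# keys() called
# __getitem__ called with a
# __getitem__ called with b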
I am working with 2 data sets on the order of ~ 100,000 values. These 2 data sets are simply lists. Each item in the list is a small class.
class Datum(object):
    def __init__(self, value, dtype, source, index1=None, index2=None):
        self.value = value
        self.dtype = dtype
        self.source = source
        self.index1 = index1
        self.index2 = index2
For each datum in one list, there is a matching datum in the other list that has the same dtype, source, index1, and index2, which I use to sort the two data sets such that they align. I then do various work with the matching data points' values, which are always floats.
Currently, if I want to determine the relative values of the floats in one data set, I do something like this.
minimum = min([x.value for x in data])
for datum in data:
    datum.value -= minimum
However, it would be nice to have my custom class inherit from float, and be able to act like this.
minimum = min(data)
data = [x - minimum for x in data]
I tried the following.
class Datum(float):
    def __new__(cls, value, dtype, source, index1=None, index2=None):
        new = float.__new__(cls, value)
        new.dtype = dtype
        new.source = source
        new.index1 = index1
        new.index2 = index2
        return new
However, doing
data = [x - minimum for x in data]
removes all of the extra attributes (dtype, source, index1, index2).
How should I set up a class that functions like a float, but holds onto the extra data that I instantiate it with?
UPDATE: I do many types of mathematical operations beyond subtraction, so rewriting all of the methods that work with a float would be very troublesome, and frankly I'm not sure I could rewrite them properly.
I suggest subclassing float and using a couple decorators to "capture" the float output from any method (except for __new__ of course) and returning a Datum object instead of a float object.
First we write the method decorator (which really isn't being used as a decorator below; it's just a function that modifies the output of another function, AKA a wrapper function):
def mydecorator(f, cls):
    # f is the method being modified, cls is its class (in this case, Datum)
    def func_wrapper(*args, **kwargs):
        # *args and **kwargs are all the arguments that were passed to f
        newvalue = f(*args, **kwargs)
        # newvalue now contains the output float would normally produce
        ## Now get the cls instance provided as part of args (we need one
        ## if we're going to reattach instance information later):
        try:
            self = args[0]
            ## Now check to make sure newvalue is an instance of some numerical
            ## type, but NOT a bool or a cls type (which might lead to recursion).
            ## Ints are included so things like modulo and round will work right.
            if ((isinstance(newvalue, float) or isinstance(newvalue, int))
                    and not isinstance(newvalue, bool) and type(newvalue) != cls):
                ## If newvalue is a float or int, make a new cls instance using
                ## newvalue for value and the previous self instance information
                ## (args[0]) for the other fields
                return cls(newvalue, self.dtype, self.source, self.index1, self.index2)
        # IndexError is raised if no args were provided; AttributeError if self isn't a cls instance
        except (IndexError, AttributeError):
            pass
        ## If newvalue isn't numerical, or we don't have a self, just return what
        ## float would normally return
        return newvalue
    # the function has now been modified and we return the modified version
    # to be used instead of the original version, f
    return func_wrapper
The first decorator only applies to a method to which it is attached. But we want it to decorate all (actually, almost all) the methods inherited from float (well, those that appear in the float's __dict__, anyway). This second decorator will apply our first decorator to all of the methods in the float subclass except for those listed as exceptions (see this answer):
def for_all_methods_in_float(decorator, *exceptions):
    def decorate(cls):
        for attr in float.__dict__:
            if callable(getattr(float, attr)) and attr not in exceptions:
                setattr(cls, attr, decorator(getattr(float, attr), cls))
        return cls
    return decorate
Now we write the subclass much the same as you had before, but decorated, and excluding __new__ from decoration (I guess we could also exclude __init__ but __init__ doesn't return anything, anyway):
@for_all_methods_in_float(mydecorator, '__new__')
class Datum(float):
    def __new__(klass, value, dtype="dtype", source="source", index1="index1", index2="index2"):
        return super(Datum, klass).__new__(klass, value)
    def __init__(self, value, dtype="dtype", source="source", index1="index1", index2="index2"):
        self.value = value
        self.dtype = dtype
        self.source = source
        self.index1 = index1
        self.index2 = index2
        super(Datum, self).__init__()
Here are our testing procedures; iteration seems to work correctly:
d1 = Datum(1.5)
d2 = Datum(3.2)
d3 = d1+d2
assert d3.source == 'source'
L=[d1,d2,d3]
d4=max(L)
assert d4.source == 'source'
L = [i for i in L]
assert L[0].source == 'source'
assert type(L[0]) == Datum
minimum = min(L)
assert [x - minimum for x in L][0].source == 'source'
Notes:
I am using Python 3. Not certain if that will make a difference for you.
This approach effectively overrides EVERY method of float other than the exceptions, even the ones for which the result isn't modified. There may be side effects to this (subclassing a built-in and then overriding all of its methods), e.g. a performance hit or something; I really don't know.
This will also decorate nested classes.
This same approach could also be implemented using a metaclass.
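For instance, a rough sketch of the metaclass variant (reusing mydecorator from above; this shows the general shape only, not a tested drop-in replacement):
def float_wrapping_meta(*exceptions):
    # Build a metaclass that applies mydecorator to every callable
    # inherited from float, except the named exceptions.
    class Meta(type):
        def __new__(meta, name, bases, namespace):
            cls = super().__new__(meta, name, bases, namespace)
            for attr in float.__dict__:
                if callable(getattr(float, attr)) and attr not in exceptions:
                    setattr(cls, attr, mydecorator(getattr(float, attr), cls))
            return cls
    return Meta

class Datum(float, metaclass=float_wrapping_meta('__new__')):
    # __new__ and __init__ exactly as in the decorated version above
    ...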
The problem is what happens when you do:
x - minimum
In terms of types, you are doing either:
datum - float, or datum - integer
Either way, Python has no datum-specific way to do the subtraction, so it looks at the parent classes of the arguments. Since datum is a kind of float, it can easily use float's subtraction, and the calculation ends up being
float - float
which will obviously result in a plain float: Python has no way of knowing how to construct your datum object unless you tell it.
To solve this, you either need to implement the mathematical operators so that Python knows how to do datum - float, or come up with a different design.
Assuming that dtype, source, index1, and index2 need to stay the same after a calculation, your class needs, as an example:
def __sub__(self, other):
    return datum(self.value - other, self.dtype, self.source, self.index1, self.index2)
This should work (not tested), and it will now allow you to do this:
d = datum(23.0, dtype="float", source="me", index1=1)
e = d - 16
print e.value, e.dtype, e.source, e.index1, e.index2
which should result in:
7.0 float me 1 None
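If, per the UPDATE above, the set of operations is actually small and known, the same pattern extends naturally. A hedged sketch under that assumption (the class layout and the _clone helper are illustrative, not from the original code):
class Datum(object):
    def __init__(self, value, dtype=None, source=None, index1=None, index2=None):
        self.value = value
        self.dtype = dtype
        self.source = source
        self.index1 = index1
        self.index2 = index2

    def _clone(self, value):
        # copy the metadata onto a new Datum holding the new value
        return Datum(value, self.dtype, self.source, self.index1, self.index2)

    def __float__(self):
        return float(self.value)

    def __sub__(self, other):
        return self._clone(self.value - float(other))

    def __add__(self, other):
        return self._clone(self.value + float(other))

    def __lt__(self, other):
        # lets min(), max(), and sorted() compare Datum objects
        return self.value < float(other)

data = [Datum(3.0), Datum(1.5)]
minimum = min(data)                 # works via __lt__
data = [x - minimum for x in data]  # works via __sub__ and __float__
print(data[0].value)                # 1.5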
I want two objects to share a single string object. How do I pass the string object from the first to the second such that any changes applied by one will be visible to the other? I am guessing that I would have to wrap the string in a sort of buffer object and do all sorts of complexity to get it to work.
However, I have a tendency to overthink problems, so undoubtedly there is an easier way. Or maybe sharing the string is the wrong way to go? Keep in mind that I want both objects to be able to edit the string. Any ideas?
Here is an example of a solution I could use:
class Buffer(object):
    def __init__(self):
        self.data = ""
    def assign(self, value):
        self.data = str(value)
    def __getattr__(self, name):
        return getattr(self.data, name)

class Descriptor(object):
    def __get__(self, instance, owner):
        return instance._buffer.data
    def __set__(self, instance, value):
        if not hasattr(instance, "_buffer"):
            if isinstance(value, Buffer):
                instance._buffer = value
                return
            instance._buffer = Buffer()
        instance._buffer.assign(value)

class First(object):
    data = Descriptor()
    def __init__(self, data):
        self.data = data
    def read(self, size=-1):
        if size < 0:
            size = len(self.data)
        data = self.data[:size]
        self.data = self.data[size:]
        return data

class Second(object):
    data = Descriptor()
    def __init__(self, data):
        self.data = data
    def add(self, newdata):
        self.data += newdata
    def reset(self):
        self.data = ""
    def spawn(self):
        return First(self._buffer)
s = Second("stuff")
f = s.spawn()
f.data == s.data
#True
f.read(2)
#"st"
f.data
# "uff"
f.data == s.data
#True
s.data
#"uff"
s._buffer == f._buffer
#True
Again, this seems like absolute overkill for what seems like a simple problem. As well, it requires the use of the Buffer class, a descriptor, and the _buffer variable that the descriptor imposes.
An alternative is to put one of the objects in charge of the string and then have it expose an interface for making changes to the string. Simpler, but not quite the same effect.
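For instance, a minimal sketch of that arrangement (names are illustrative):
class Owner(object):
    def __init__(self, text=""):
        self._text = text
    def read(self):
        return self._text
    def append(self, more):
        self._text += more

class Client(object):
    def __init__(self, owner):
        self.owner = owner       # every edit goes through the owner

owner = Owner("stuff")
client = Client(owner)
client.owner.append("!")
print(owner.read())              # stuff!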
"I want two objects to share a single string object."

They will, if you simply pass the string -- Python doesn't copy unless you tell it to copy.

"How do I pass the string object from the first to the second such that any changes applied by one will be visible to the other?"

There can never be any change made to a string object (it's immutable!), so your requirement is trivially met (since a false precondition implies anything).

"I am guessing that I would have to wrap the string in a sort of buffer object and do all sorts of complexity to get it to work."
You could use (assuming this is Python 2 and you want a string of bytes) an array.array with a typecode of c. Arrays are mutable, so you can indeed alter them (with mutating methods -- and some operators, which are a special case of methods since they invoke special methods on the object). They don't have the myriad non-mutating methods of strings, so, if you need those, you'll indeed need a simple wrapper (delegating said methods to the str(...) of the array that the wrapper also holds).
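A minimal sketch (Python 2, where the 'c' typecode holds single bytes):
import array

shared = array.array('c', 'stuff')
a = shared            # two names, one mutable object
b = shared
a[0] = 'S'            # mutation, not rebinding
print b.tostring()    # Stuff -- the change is visible through the other name too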
It doesn't seem there should be any special complexity, unless of course you want to do something truly weird, as you seem to given your example code: have an assignment (i.e., a rebinding of a name) magically affect a different name. That has absolutely nothing to do with whatever object was previously bound to the name you're rebinding, nor does it change that object in any way; the only object it "changes" is the one holding the attribute, so it's obvious that you need descriptors or other magic on said object.
You appear to come from some language where variables (and particularly strings) are "containers of data" (like C, Fortran, or C++). In Python (like, say, in Java), names (the preferred way to call what others call "variables") always just refer to objects, they don't contain anything except exactly such a reference. Some objects can be changed, some can't, but that has absolutely nothing to do with the assignment statement (see note 1) (which doesn't change objects: it rebinds names).
(note 1): except of course that rebinding an attribute or item does alter the object that "contains" that item or attribute -- objects can and do contain, it's names that don't.
Just put your value to be shared in a list, and assign the list to both objects.
class A(object):
    def __init__(self, strcontainer):
        self.strcontainer = strcontainer
    def upcase(self):
        self.strcontainer[0] = self.strcontainer[0].upper()
    def __str__(self):
        return self.strcontainer[0]

# create a string, inside a shareable list
shared = ['Hello, World!']
x = A(shared)
y = A(shared)

# both objects have the same list
print id(x.strcontainer)
print id(y.strcontainer)

# change value in x
x.upcase()

# show how value is changed in both x and y
print str(x)
print str(y)
Prints:
10534024
10534024
HELLO, WORLD!
HELLO, WORLD!
I am not a great expert in Python, but I think that if you declare a variable in a module and add a getter/setter to the module for this variable, you will be able to share it this way.
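Presumably something along these lines (the module and function names here are hypothetical):
# shared_state.py
text = "Hello"

def get_text():
    return text

def set_text(value):
    global text
    text = value
Both objects then import shared_state and read and write the string only through these two functions:
import shared_state

shared_state.set_text("Goodbye")
print shared_state.get_text()   # Goodbye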
I am always annoyed by this fact:
$ cat foo.py
def foo(flag):
    if flag:
        return (1,2)
    else:
        return None

first, second = foo(True)
first, second = foo(False)
$ python foo.py
Traceback (most recent call last):
File "foo.py", line 8, in <module>
first, second = foo(False)
TypeError: 'NoneType' object is not iterable
The fact is that in order to unpack correctly without trouble, I either have to catch the TypeError or write something like
values = foo(False)
if values is not None:
    first, second = values
which is kind of annoying. Is there a trick to improve this situation (e.g. to set both first and second to None without having foo return (None, None)), or a suggestion about the best design strategy for cases like the one I present? *variables, maybe?
Well, you could do...
first, second = foo(True) or (None, None)
first, second = foo(False) or (None, None)
but as far as I know there's no simpler way to expand None to fill in the entirety of a tuple.
I don't see what is wrong with returning (None,None). It is much cleaner than the solutions suggested here which involve far more changes in your code.
It also doesn't make sense that you want None to automagically be split into 2 variables.
I think there is a problem of abstraction.
A function should maintain some level of abstraction that helps reduce the complexity of the code.
In this case, either the function is not maintaining the right abstraction, or the caller is not respecting it.
The function could have been something like get_point2d(); in this case, the level of abstraction is the tuple, and therefore returning None would be a good way to signal some particular case (e.g. a non-existing entity). The error in this case would be to expect two items, while actually the only thing you know is that the function returns one object (with information related to a 2D point).
But it could also have been something like get_two_values_from_db(); in this case the abstraction would be broken by returning None, because the function (as the name suggests) should return two values, not one!
Either way, the main goal of using a function - reducing complexity - is, at least partially, lost.
Note that this issue would not appear clearly with the original name; that's also why it is always important to give good names to functions and methods.
I don't think there's a trick. You can simplify your calling code to:
values = foo(False)
if values:
    first, second = values
or even:
values = foo(False)
first, second = values or (first_default, second_default)
where first_default and second_default are values you'd give to first and second as defaults.
How about this:
$ cat foo.py
def foo(flag):
    if flag:
        return (1,2)
    else:
        return (None,)*2

first, second = foo(True)
first, second = foo(False)
Edit: Just to be clear, the only change is to replace return None with return (None,)*2. I am extremely surprised that no one else has thought of this. (Or if they have, I would like to know why they didn't use it.)
You should be careful with the x or y style of solution. They work, but they're a bit broader than your original specification. Essentially, what if foo(True) returns an empty tuple ()? As long as you know that it's OK to treat that as (None, None), you're good with the solutions provided.
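To make that concrete:
print(() or (None, None))    # (None, None) -- an empty tuple is swallowed
print(None or (None, None))  # (None, None) -- indistinguishable from the None case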
If this were a common scenario, I'd probably write a utility function like:
# needs a better name! :)
def to_tup(t):
    return t if t is not None else (None, None)

first, second = to_tup(foo(True))
first, second = to_tup(foo(False))
def foo(flag):
    return ((1,2) if flag else (None, None))
OK, I would just return (None, None), but as long as we are in whacko-land (heh), here is a way using a subclass of tuple. In the else case, you don't return None, but instead return an empty container, which seems to be in the spirit of things. The container's "iterator" unpacks None values when empty. Demonstrates the iterator protocol anyway...
Tested using v2.5.2:
class Tuple(tuple):
    def __iter__(self):
        if self:
            # If Tuple has contents, return the normal tuple iterator...
            return super(Tuple, self).__iter__()
        else:
            # Else return a bogus iterator that returns None twice...
            class Nonerizer(object):
                def __init__(self):
                    self.x = 0
                def __iter__(self):
                    return self
                def next(self):
                    if self.x < 2:
                        self.x += 1
                        return None
                    else:
                        raise StopIteration
            return Nonerizer()

def foo(flag):
    if flag:
        return Tuple((1,2))
    else:
        return Tuple()  # It's not None, but it's an empty container.

first, second = foo(True)
print first, second
first, second = foo(False)
print first, second
Output is the desired:
1 2
None None
Over 10 years later: if you want to use default values, I don't think there is a better way than the one already provided:
first, second = foo(False) or (first_default, second_default)
However, if you want to skip the case when None is returned, starting from Python 3.8 you can use the walrus operator (i.e. assignment expressions); also note the simplified foo:
def foo(flag):
    return (1, 2) if flag else None

if values := foo(False):
    (first, second) = values
You could use an else branch to assign default values, though that's worse than the previous or option.
Sadly, the walrus operator does not support unparenthesized tuple unpacking, so this is just a one-line gain compared to:
values = foo(False)
if values:
    first, second = values
One mechanism you can use to avoid the problem entirely when you have control of the method foo is to change the prototype to allow giving a default. This works if you are wrapping state but can't guarantee that a particular tuple value exists.
import json

# Wrapping the two methods in a hypothetical class so the snippet is
# self-contained; self.r is, for example, a redis client.
class Cache(object):
    # This method is like foo -
    # it has trouble when you unpack a JSON-serialized tuple
    def __getitem__(self, key):
        val = self.r.get(key)
        if val is None:
            return None
        return json.loads(val)

    # But this method allows the caller to
    # specify their own default value, whether it be
    # (None, None) or an entire object
    def get(self, key, default):
        val = self.r.get(key)
        if val is None:
            return default
        return json.loads(val)
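Callers can then supply whatever sentinel suits them (cache here is a hypothetical instance of the class above, holding a JSON-serialized pair under "point"):
first, second = cache.get("point", (None, None))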
I found a solution for this problem: return None, or return an object. However, you don't want to have to write a class just to return an object; for this, you can use a namedtuple.
Like this:
from collections import namedtuple

def foo(flag):
    if flag:
        MyResult = namedtuple('MyResult', ['a', 'b', 'c'])
        return MyResult._make([1, 2, 3])
    else:
        return None
And then:
result = foo(True)   # result = MyResult(a=1, b=2, c=3)
result = foo(False)  # result = None
And, when a result was returned, you have access to the fields like this:
result = foo(True)
print result.a  # 1
print result.b  # 2
print result.c  # 3
print result.c # 3