I have some simple code to find the indices of two elements that add up to a given sum (assume a solution exists in the list).
class Solution(object):
    def twoSum(self, nums, target):
        compliment = []
        for ind, item in enumerate(nums):
            print(ind)
            if item in compliment:
                return [nums.index(target - item), ind]
            compliment.append(target - item)
        return [0, 0]
if __name__ == "__main__":
    result = Solution()
    final = result.twoSum([3, 3], 6)
    # Why does this not work without removing the self parameter??
    # final = Solution.twoSum([3, 3], 6)
    print(str(final))
I'm trying to learn the best way to instantiate an object in Python. In my main block, I thought I'd simplify things by doing it in one line instead of two. You can see my two attempts to call the method of this class: the second fails unless I remove the self parameter from the method signature, because I'm passing two arguments instead of three.
Anyway, I'm confused about why my two calls behave differently and why one works while the other doesn't. I'm also not sure I even need self here at all. It seems like self is mostly used when you have __init__ and are defining attributes for the class? Since I'm not doing that here, do I even need it at all?
The self parameter is only required (and will only work) for instance methods, which are the default kind of method. To use the method without an instance, and without the self parameter, decorate it with @staticmethod:
class Solution(object):
    @staticmethod
    def twoSum(nums, target):
        compliment = []
        for ind, item in enumerate(nums):
            print(ind)
            if item in compliment:
                return [nums.index(target - item), ind]
            compliment.append(target - item)
        return [0, 0]

if __name__ == "__main__":
    final = Solution.twoSum([3, 3], 6)
    print(str(final))
You can opt to decorate your function as either a staticmethod or a classmethod in Python. In the case of a classmethod, you would need cls in the method signature. Here is a good discussion distinguishing the two:
What is the difference between @staticmethod and @classmethod?
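For illustration, the classmethod variant would look roughly like this (same idea as the staticmethod version above, with cls in place of self; the seen list here is just a renamed version of the complement list):

class Solution(object):
    @classmethod
    def twoSum(cls, nums, target):
        seen = []
        for ind, item in enumerate(nums):
            if item in seen:
                return [nums.index(target - item), ind]
            seen.append(target - item)
        return [0, 0]

print(Solution.twoSum([3, 3], 6))    # called on the class: [0, 1]
print(Solution().twoSum([3, 3], 6))  # also works on an instance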
BTW, for your code I would highly suggest using a set instead of a list; it will make your code much more efficient, since checking whether a value was already seen in a set is a constant-time operation on average.
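A rough sketch of that suggestion (same algorithm as above, with the list swapped for a set so the membership check is O(1) on average):

class Solution(object):
    @staticmethod
    def twoSum(nums, target):
        seen = set()                      # complements seen so far
        for ind, item in enumerate(nums):
            if item in seen:              # average O(1) membership test
                return [nums.index(target - item), ind]
            seen.add(target - item)
        return [0, 0]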
I have a problem where I need to produce something which is naturally computed recursively, but where I also need to be able to interrogate the intermediate steps in the recursion if needed.
I know I can do this by passing and mutating a list or similar structure. However, this looks ugly to me and I'm sure there must be a neater way, e.g. using generators. What I would ideally love to be able to do is something like:
intermediate_results = [f(x) for x in range(T)]
final_result = intermediate_results[T-1]
in an efficient way. While my solution is not performance critical, I can't justify the massive amount of redundant effort in that first line. It looks to me like a generator would be perfect for this except for the fact that f is fundamentally much more suited to recursion in my case (which at least in my mind is the complete opposite of a generator, but maybe I'm just not thinking far enough outside of the box).
Is there a neat Pythonic way of doing something like this that I just don't know about, or do I need to capitulate and pollute my function f by passing it an intermediate_results list which I then mutate as a side effect?
I have a generic solution for you using a decorator. We create a Memoize class which stores the results of previous executions of the function (including recursive calls). If the given arguments have already been seen, the cached result is used instead of recomputing it.
The custom class has the benefit over functools.lru_cache that you can inspect the stored results.
from functools import wraps

class Memoize:
    def __init__(self):
        self.store = {}

    def save(self, fun):
        @wraps(fun)
        def wrapper(*args):
            if args not in self.store:
                self.store[args] = fun(*args)
            return self.store[args]
        return wrapper

m = Memoize()

@m.save
def fibo(n):
    if n <= 0: return 0
    elif n == 1: return 1
    else: return fibo(n-1) + fibo(n-2)
Then, after running different things, you can see what the cache contains. When you make future function calls, m.store is used as a lookup so the calculation doesn't need to be redone.
>>> fibo(8)
21
>>> m.store
{(1,): 1,
 (0,): 0,
 (2,): 1,
 (3,): 2,
 (4,): 3,
 (5,): 5,
 (6,): 8,
 (7,): 13,
 (8,): 21}
You could modify the save function to use the name of the function and the args as the key, so that multiple function results can be stored in the same Memoize class.
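A possible sketch of that variation (my own, not from the original code), replacing Memoize.save so the store is keyed on the function name plus its arguments:

    def save(self, fun):
        @wraps(fun)
        def wrapper(*args):
            key = (fun.__name__,) + args      # e.g. ('fibo', 8)
            if key not in self.store:
                self.store[key] = fun(*args)
            return self.store[key]
        return wrapper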
You can keep your existing solution that makes many "redundant" calls to f, but employ function caching to save the results of previous calls to f.
In other words, when f(x1) is called, its input arguments and the corresponding return value are saved, and the next time it is called with the same arguments the result is simply pulled from the cache.
See functools.lru_cache for the standard-library solution to this. The decorator goes on f itself, not on the list comprehension:

from functools import lru_cache

@lru_cache(maxsize=None)
def f(x):
    ...  # your existing recursive implementation, unchanged

intermediate_results = [f(x) for x in range(T)]
final_result = intermediate_results[T-1]
Note, however, that f must be a pure function (no side effects, and the same arguments always produce the same result) and its arguments must be hashable for this to work properly.
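One caveat if you also want to inspect the intermediate results, as in the question: lru_cache keeps them internally and only exposes statistics, for example:

print(f.cache_info())   # CacheInfo(hits=..., misses=..., maxsize=..., currsize=...)
f.cache_clear()         # discard the cached results if you need a fresh start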
Having considered your comments, I'll now try to give another perspective on the problem.
So, let's consider a concrete example:
def f(x):
    a = 2
    return g(x) + a if x != 0 else 0

def g(x):
    b = 1
    return h(x) - b

def h(x):
    c = 1/2
    return f(x-1)*(1+c)
I
First of all, it should be mentioned that (in our particular case) the algorithm has the form f(x) = p(f(x - 1)) for some p. It follows that f(x) = p^x(f(0)) = p^x(0). That means we should just apply p to 0 x times to get the desired result, which can be done as an iterative process, so this can be written without recursion. Though I believe that your real case is much harder, and it would be too boring and uninformative to stop here.
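Purely as an illustration (the helper name f_iterative is mine): collapsing the f, g, h above by hand gives p(v) = 1.5*v + 1, and the intermediate results then fall out of the loop instead of the recursion:

def f_iterative(x):
    intermediate = [0]            # f(0) == 0
    value = 0
    for _ in range(x):
        value = value * 1.5 + 1   # one application of p
        intermediate.append(value)
    return value, intermediate

print(f_iterative(4))   # same value as f(4), plus every step along the way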
II
Generally speaking, we can divide all possible solutions into two groups: the ones that require refactoring (i.e. rewriting the functions f, g, h) and the ones that do not. I have little to offer from the latter group (and I don't think anyone can offer much). Consider the following, however:
def fk(x, k):
    a = 2
    return k(gk(x, k) + a if x != 0 else 0)

def gk(x, k):
    b = 1
    return k(hk(x, k) - b)

def hk(x, k):
    c = 1/2
    return k(fk(x-1, k)*(1+c))

def printret(x):
    print(x)
    return x

fk(4, printret)  # see what happens
Inspired by continuation-passing style, but that's totally not it.
What's the point? It's something between your idea of passing a list to write down all the computations and memoizing. This k carries additional behavior with it, such as printing or writing to a list (you can make a function that writes to some list, why not? see the sketch after the pros and cons below). But if you look carefully you'll see that it leaves the inner code of these functions practically untouched (only the input and output of each function are affected), so one could produce a decorator associated with a function like printret that does essentially the same thing for f, g, h.
Pros: no need to modify the inner code, much more flexible than passing a list, no extra bookkeeping (as with memoizing).
Cons: impure (printing or modifying something), not as flexible as we would like.
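For example, a k that records every intermediate value in a list might look like this (my own sketch, reusing fk from above):

results = []

def recorder(x):
    results.append(x)   # write down each intermediate value...
    return x            # ...and pass it through unchanged

fk(4, recorder)
print(results)          # all intermediate values, in the order they were computed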
III
Now let's see how modifying function bodies can help. Don't be afraid of what's written below, take your time and play with that thing a little.
class Logger:
    def __init__(self, lst, cur_val):
        self.lst = lst
        self.cur_val = cur_val

    def bind(self, f):
        res = f(self.cur_val)
        return Logger([self.cur_val] + res.lst + self.lst, res.cur_val)

    def __repr__(self):
        return "Logger( " + repr({'value': self.cur_val, 'lst': self.lst}) + " )"

def unit(x):
    return Logger([], x)

# you can also play with lala
def lala(x):
    if x <= 0:
        return unit(1)
    else:
        return lala(x - 1).bind(lambda y: unit(2*y))

def f(x):
    a = 2
    if x == 0:
        return unit(0)
    else:
        return g(x).bind(lambda y: unit(y + a))

def g(x):
    b = 1
    return h(x).bind(lambda y: unit(y - b))

def h(x):
    c = 1/2
    return f(x-1).bind(lambda y: unit(y*(1+c)))

f(4)  # see for yourself
Logger is called a monad. I'm not very familiar with this concept myself, but I guess I'm doing everything right. f, g, h are functions that take a number and return a Logger instance. Logger's bind takes a function (like f) and returns a Logger with the new value (computed by that function) and updated 'logs'. The key point, as I see it, is the ability to do whatever we want with the collected intermediate values, in the order the resulting value was calculated.
Afterword
I'm not at all some kind of 'guru' of functional programming, and I believe I'm missing a lot of things here. But what I've understood is that functional programming is about inverting the flow of the program. That's why, for instance, I totally agree with your opinion about generators being opposed to functional programming. When we use a generator gen in, say, a function func, we yield values one by one to func, and func does something with them in, e.g., a loop. The functional approach would be to make gen a function taking func as a parameter and have func perform its computations on the 'yielded' values. It's as if gen and func exchanged places, so the flow is inverted. And there are plenty of other ways of inverting the flow; monads are one of them.
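A tiny sketch of that inversion (the names gen_push and func are mine):

def gen_push(func):
    for value in range(3):   # instead of yielding to a caller's loop...
        func(value)          # ...push each value into func

gen_push(print)   # func now does the work on each 'yielded' value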
itertools.islice takes an iterable, a start value, and a stop value, and gives you the elements between start and stop as an iterator. If islice is not clear you can check the docs here: https://docs.python.org/3/library/itertools.html. Note that islice takes its arguments positionally:

import itertools

intermediate_result = map(f, range(T))
final_result = next(itertools.islice(intermediate_result, T-1, T))
I am new to learning Python. While reading a book, I found a line of code that I can't understand.
Please see the line in the print_progression() method: print(' '.join(str(next(self)) for j in range(n))).
class Progression:
    '''Iterator producing a generic progression.

    Default iterator produces the whole numbers 0, 1, 2, ...
    '''
    def __init__(self, start = 0):
        '''
        Initialize current to the first value of the progression.
        '''
        self._current = start

    def _advance(self):
        '''
        Update self._current to a new value.

        This should be overridden by a subclass to customize the progression.
        By convention, if current is set to None, this designates the
        end of a finite progression.
        '''
        self._current += 1

    def __next__(self):
        '''
        Return the next element, or else raise a StopIteration error.
        '''
        # Our convention to end a progression
        if self._current is None:
            raise StopIteration()
        else:
            # record current value to return
            answer = self._current
            # advance to prepare for next time
            self._advance()
            # return the answer
            return answer

    def __iter__(self):
        '''
        By convention, an iterator must return itself as an iterator.
        '''
        return self

    def print_progression(self, n):
        '''
        Print next n values of the progression.
        '''
        print(' '.join(str(next(self)) for j in range(n)))

class ArithmeticProgression(Progression):  # inherit from Progression
    pass
if __name__ == '__main__':
    print('Default progression:')
    Progression().print_progression(10)

'''Output is
Default progression:
0 1 2 3 4 5 6 7 8 9'''
I have no idea how next(self) and j work.
I think it should be str(Progression.next()). (solved)
I cannot find j anywhere. What is j for? Why not use a while loop, such as while Progression.next() <= range(n)?
For my final thought, it should be something like
print(' '.join(str(next(self)) while next(self) <= range(n)))
Save this newbie.
Thanks in advance!
I think @csevier added a reasonable discussion about your first question, but I'm not sure the second question is answered as clearly for you based on your comments, so I'm going to try a different angle.
Let's say you did:
for x in range(10):
    print(x)
That's reasonably understandable: you created the sequence of numbers [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] and you printed each of the values in turn. Now let's say that we wanted to just print "hello" 10 times; well, we could modify our existing code very simply:
for x in range(10):
    print(x)
    print('hello')
Umm, but now the x is messing up our output. There isn't a:
do this 10 times:
    print('hello')
syntax. We could use a while loop but that means defining an extra counter:
loop_count = 0
while loop_count < 10:
    print('hello')
    loop_count += 1
That's baggage. So the better way is just to use for x in range(10): and not bother doing print(x); the value is there to make our loop work, not because it's actually useful in any other way. This is the same for j (though I've used x in my examples because I think you're more likely to encounter it in tutorials, but you could use almost any name you want). Also, while loops are generally used for loops that can run indefinitely, not for iterating over an object of fixed size: see here.
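If you want to make it explicit that the value is unused, a common convention is to name it with a single underscore:

for _ in range(10):
    print('hello')   # the underscore signals "I don't care about this value"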
Welcome to the Python community! This is a great question. In Python, as in other languages, there are many ways to do things, but when you follow a convention the Python community uses, that is often referred to as a "pythonic" solution. The print_progression method is a common pythonic way to iterate over a user-defined data structure. In the case above, let's first explain how the code works and then why we would do it that way.
Your print_progression method takes advantage of the fact that your Progression class implements the iterator protocol via the __next__ and __iter__ dunder/magic methods. Because those are implemented, you can iterate your class instance both internally, as next(self) does, and externally, e.g. next(Progression()), which is exactly what you were getting at with your first point. Because this protocol is implemented, the class can be used in any built-in iterator and generator context by any client. That's a polymorphic solution; it's just used internally here as well because you don't need to do it two different ways.
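For example (my own illustration; itertools.islice is used because the default progression never ends):

from itertools import islice

p = Progression()
print(list(islice(p, 5)))   # [0, 1, 2, 3, 4] -- built-in tools drive __next__
print(next(p))              # 5 -- or call __next__ externally yourself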
Now for the unused j variable: it is only there so the for loop can be used; range(n) on its own would return an iterable without iterating over it. I don't quite agree with the author's use of the name j; it's more common to denote an unused variable, one that exists only because the loop needs it, with a single underscore. I like this a little better:
print(' '.join(str(next(self)) for _ in range(n)))
I have a function that works exactly how I want it to, but for my coursework I have to turn this function into a class that:
Must have a function called solveIt, which returns the following two values:
a boolean that is True if you've solved this knapsack problem, and
the knapsack object with the correct values in it.
The class must have a __str__() function that returns a string like this. The first line is the size, and the second is a comma-separated list of the elements:
10
4,1,9,2,0,4,4,4,3,7
I don't understand classes that well, so any help will be appreciated. Here is the function I have right now:
from itertools import combinations

def com_subset_sum(seq, target):
    if target == 0 or target in seq:
        print(target)
        return True
    for r in range(len(seq), 1, -1):
        for subset in combinations(seq, r):
            if sum(subset) == target:
                print(subset)
                return True
    return False

print(com_subset_sum([4,1,9,2,0,4,4,4,3,7], 10))
One obvious way to transform a function to a class is to turn the function parameters (or some of them) into object attributes. For example:
class Knapsack(object):
    def __init__(self, seq, target):
        self.seq = seq
        self.target = target
        self.solution = None

    def solveIt(self):
        if self.target == 0 or self.target in self.seq:
            self.solution = (self.target,)
            return True, self.solution
        for r in range(len(self.seq), 1, -1):
            for subset in combinations(self.seq, r):
                if sum(subset) == self.target:
                    self.solution = subset
                    return True, self.solution
        return False, ()
Now you can do this:
>>> knapsack = Knapsack([4,1,9,2,0,4,4,4,3,7],10)
>>> print(knapsack.solveIt())
(True, (4, 1, 2, 0, 3))
And then, adding a __str__ method is simple:
def __str__(self):
    if self.solution is None:
        self.solveIt()
    return '{}\n{}'.format(len(self.seq),
                           ','.join(map(str, self.solution)))
The reason I added that self.solution is so that calling __str__ over and over won't keep calculating the results over and over. You could just as easily drop that member and write this:
def __str__(self):
    solved, solution = self.solveIt()
    return '{}\n{}'.format(len(self.seq),
                           ','.join(map(str, solution)))
Either way, I'm not sure how this is better than the function. (In fact, it's strictly worse: with the function, you can always use functools.partial to bind in just the sequence, or both the sequence and the target, or of course bind in neither, whereas with the class, you always have to bind in both.)
Maybe your professor has given you some kind of hints on how you'd want to use this object that might help? Or maybe your professor is just an idiot who doesn't know how to come up with a good motivating assignment for teaching you about classes…
[Sorry, I'm new to Python. Although this seems to be a very basic question, I did my share of due diligence before asking this audience, trying to avoid really stupid questions.]
I'm trying to figure out the correct idiom for returning an l-value from a function. Assume I have a container of 64 objects, and I want to be able to return a reference to these objects.
class ChessBoard:
    def __init__(self):
        self.squares = [None for x in range(64)]

    def square(self, row, col):
        return self.squares[row*8 + col]   # <-- I'd like this to be an l-value
Then, from outside the class I want to:
board = ChessBoard()
board.square(0,0) = Piece(Shapes.ROOK, Colors.White)   # <-- I'm getting an error here
board.square(0,1) = Piece(Shapes.BISHOP, Colors.White)
... etc.
So, I would like the square function to return an l-value (something like a reference in C++), but I can't find anything resembling a reference or a pointer in the language. If I stored a list containing one Piece in each square, I could possibly do something like board.square(0,0)[0] = Piece, but it seems crazy (or maybe not; as I said, I'm new to the language).
How would you approach this data structure?
In Python, everything is a reference. The only problem is that None is immutable, so you can't use the returned reference to change the value.
You also can't override the assignment operator, so you won't get this particular kind of behaviour. However, a good and very flexible solution would be to override the __setitem__ and __getitem__ methods to implement the subscription operator ([]) for the class:
class ChessBoard(object):
    def __init__(self):
        self.squares = [None] * 64

    def __setitem__(self, key, value):
        row, col = key
        self.squares[row*8 + col] = value

    def __getitem__(self, key):
        row, col = key
        return self.squares[row*8 + col]
Usage:
>>> c = ChessBoard()
>>> c[1,2] = 5
>>> c[1,2]
5
You can try something like this, at the cost of having to put bogus [:] indexers around:
class Board:
    def __init__(self):
        self.squares = [None for x in range(64)]

    def square(self, row, col):
        squares = self.squares

        class Prox:
            def __getitem__(self, i):
                return squares[row*8 + col]
            def __setitem__(self, i, v):
                squares[row*8 + col] = v

        return Prox()
Then you can do
b = Board()
b.square(2,3)[:] = Piece('Knight')
if b.square(x,y)[:] == Piece('King'): ...
And so on. It doesn't actually matter what you put in the []s, it just has to be something.
(Got the idea from the Proxies Perl6 uses to do this)
As Niklas points out, you can't return an l-value.
However, in addition to overriding subscription, you can also use properties (an application of descriptors: http://docs.python.org/howto/descriptor.html) to create an object attribute which, when read from or assigned to, runs code.
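As a rough sketch of that idea (the Square class and piece property here are my own invention, not from the question):

class Square(object):
    def __init__(self):
        self._piece = None

    @property
    def piece(self):
        return self._piece          # runs on read

    @piece.setter
    def piece(self, value):
        self._piece = value         # runs on assignment

class ChessBoard(object):
    def __init__(self):
        self.squares = [Square() for _ in range(64)]

    def square(self, row, col):
        return self.squares[row*8 + col]

board = ChessBoard()
board.square(0, 0).piece = "white rook"   # assignment goes through the setter
print(board.square(0, 0).piece)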
(Not answering your question in the title, but your "How would you approach this data structure?" question:) A more pythonic solution for your data structure would be using a list of lists:
# define a function that generates an empty chess board
def make_chess_board():
    return [[None for x in range(8)] for y in range(8)]

# grab an instance
b = make_chess_board()

# play the game!
b[0][0] = Piece(Shapes.ROOK, Colors.White)
b[0][1] = Piece(Shapes.BISHOP, Colors.White)

# Or use tuples:
b[0][0] = (Shapes.ROOK, Colors.White)
b[0][1] = (Shapes.BISHOP, Colors.White)
I want to implement a custom list class in Python as a subclass of list. What is the minimal set of methods I need to override from the base list class in order to get full type compatibility for all list operations?
This question suggests that at least __getslice__ needs to be overridden. From further research, __add__ and __mul__ are also required. So I have this code:
class CustomList(list):
    def __getslice__(self, i, j):
        return CustomList(list.__getslice__(self, i, j))
    def __add__(self, other):
        return CustomList(list.__add__(self, other))
    def __mul__(self, other):
        return CustomList(list.__mul__(self, other))
The following statements work as desired, even without the overriding methods:
l = CustomList((1,2,3))
l.append(4)
l[0] = -1
l[0:2] = CustomList((10,11)) # type(l) is CustomList
These statements work only with the overriding methods in the above class definition:
l3 = l + CustomList((4,5,6)) # type(l3) is CustomList
l4 = 3*l # type(l4) is CustomList
l5 = l[0:2] # type(l5) is CustomList
The only thing I don't know how to achieve is making extended slicing return the right type:
l6 = l[0:2:2] # type(l6) is list
What do I need to add to my class definition in order to get CustomList as type of l6?
Also, are there list operations other than extended slicing where the result will be of type list instead of CustomList?
Firstly, I recommend you follow Björn Pollex's advice (+1).
To get past this particular problem (type(l2 + l3) == CustomList), you need to implement a custom __add__():
def __add__(self, rhs):
    return CustomList(list.__add__(self, rhs))
And for extended slicing:
def __getitem__(self, item):
    result = list.__getitem__(self, item)
    try:
        return CustomList(result)
    except TypeError:
        return result
I also recommend...
pydoc list
...at your command prompt. You'll see which methods list exposes and this will give you a good indication as to which ones you need to override.
You should probably read these two sections from the documentation:
Emulating container types
Additional methods for emulating sequence types (Python 2 only)
Edit: In order to handle extended slicing, you should make your __getitem__-method handle slice-objects (see here, a little further down).
Possible cut-the-Gordian-knot solution: subclass collections.UserList instead of list. (Worked for me.) That is what UserList is there for.
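A quick sketch of that route (assuming a reasonably recent Python 3, where collections.UserList returns your subclass from slicing, concatenation, and repetition; older versions returned a plain list from slices):

from collections import UserList

class CustomList(UserList):
    pass

cl = CustomList([1, 2, 3])
print(type(cl[0:3:2]))             # <class '__main__.CustomList'>
print(type(cl + CustomList([4])))  # <class '__main__.CustomList'>
print(type(cl * 3))                # <class '__main__.CustomList'>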
As a slight modification to Johnsyweb's answer, I would only convert to a CustomList if item is a slice. Otherwise CustomList(["ab"])[0] would give you CustomList(["a", "b"]), which is not what you want. Like this:
def __getitem__(self, item):
    result = list.__getitem__(self, item)
    if type(item) is slice:
        return CustomList(result)
    else:
        return result